When Automation And AI Are Better Than People For Software Deployment
Whether for high-speed trains or Python deployments, automated failsafes are already necessary, and will only become more so as complexity and speed increase.
This piece is about secure deployment of enterprise software, but I want to set the scene with an analogy. A little while back, a company where I worked partnered with a systems integrator in Europe that was deploying high-speed train systems. One of the key implementation requirements was a highly reliable mesh of sensors and cameras throughout the rail system. The rationale was that train engineers were unable to stop trains fast enough using only their own vision and reflexes. By the time the engineer saw something on the tracks, it was too late to stop the train. So engineers needed a new set of "eyes" to maximize the potential speed of the train.
Solutions like this are not uncommon in physical operational environments. Safety systems are regularly deployed in factories that can shut down a machine faster than any person could when there is a problem. For that matter, we see this all the time in fire-extinguishing systems inside commercial vehicles and, closer to home, airbags in our cars.
Digital versions of automated safety systems for software development are now becoming more common as we see massive scale from cloud computing and increasingly high-velocity automation driven by AI. And while we should be cautious about the potential negative impacts of AI and automation for things like disinformation or replacing jobs, we also must accept that there are some necessary tasks that humans will not be able to perform as we become even more reliant on computing. In other words, it is inevitable that we will need automation to control our automation.
Catching Software Vulnerabilities Before Exploits Can Happen
This brings me to an interesting story I recently heard from software supply chain platform company JFrog. Like many software infrastructure companies, JFrog began in the developer operations space, then broadened its offerings into other operational areas, including cybersecurity. Security is a natural extension for JFrog: if you can catch and fix software security vulnerabilities before deployment, or immediately after, you can significantly reduce the risk of a security breach.
In this case, the vulnerability was caused by human error: a committer accidentally left an exploitable access token in a published Python file. Had this back door been exploited, it could have left every Python system in the world vulnerable. Given Python's widespread use, the impact could have been massive. (Think of Stuxnet or the recent CrowdStrike outage for a sense of the magnitude.) JFrog's vice president of product marketing, Jens Eckels, wrote a great blog post about this event and how JFrog handled it.
The good news is that JFrog regularly performs R&D and testing for its DevSecOps tools on public repositories, and the team caught this problem as part of those routine efforts. They immediately notified the committer, who quickly resolved the issue—and the crisis was averted. This is of course great news.
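JFrog hasn't published the exact detection logic it uses, but the general technique of scanning repository artifacts for credential-like strings before they ship can be sketched in a few lines of Python. The token patterns below are illustrative assumptions, not JFrog's implementation; production scanners carry hundreds of rules plus entropy checks to cut false positives.

```python
import re

# Illustrative credential patterns only -- real secret scanners ship far
# larger rule sets and combine regexes with entropy analysis.
SECRET_PATTERNS = {
    "github_token": re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs found in one file's text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running a check like this over every artifact in a public repository, automatically and continuously, is exactly the kind of task where the machine's reaction time beats any human reviewer's.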
Software Security Automation Has Both Technical And Human Elements
Since there was no catastrophic event, this story may seem anticlimactic, but there are lessons to be learned from what happened, how JFrog handled it, and what it means for a hyper-automated future.
So whether we're talking about an employee, a factory, a high-speed train, or your IDE, AI or any other form of automation should not simply replace what's already in place; it should improve upon it.
What AI Regulations Mean For Software Developers
Both the US and the EU have mandated a risk-based approach to AI development. Whatever your risk level, ultimately it's all about transparency and security.
As organizations of all sizes and sectors race to develop, deploy or buy AI and LLM-based products and services, what are the things they should be thinking about from a regulatory perspective? And if you're a software developer, what do you need to know?
The regulatory approaches of the EU and US have, between them, firmed up some of the more confusing areas. In the US, we've seen a new requirement that all US federal agencies have a chief AI officer and submit annual reports identifying all AI systems in use, any risks associated with them, and how they plan to mitigate those risks. This echoes the EU's requirements for similar risk, testing, and oversight before deployment in high-risk cases.
Both have adopted a risk-based approach, with the EU specifically identifying the importance of "Security by design and by default" for "High-risk AI systems." In the US, the CISA states that "Software must be secure by design, and Artificial Intelligence is no exception."
This is likely to be music to the ears of anyone familiar with proactive security. The more we do to reduce the friction between machine logic and human analysis, the more we can anticipate threats and mitigate them before they become a problem.
Code is at the core
At a fundamental level, "security by design and by default" begins with the software developer and the code used to build AI models and applications. As AI development and regulation expand, the developer's role will evolve to include security as part of daily life. Code is the raw material of AI, and if you're a developer using AI, you're going to be keeping a closer eye than ever on weaknesses and security.
Hallucinated or deliberately poisoned software packages and libraries are already emerging as a very real threat. Software supply chain attacks that start life as malware on developer workstations can have serious consequences for the security and integrity of data models, training sets, and even final products. It's worth noting that the malicious submissions that have dogged code repositories for years are already appearing on AI development platforms. With reports of massive volumes of data being exchanged between enterprise applications and AI and machine learning platforms such as Hugging Face (aka "the GitHub of AI"), baking security in from the outset has never been more critical.
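One concrete defense against hallucinated or substituted packages is to refuse any dependency that isn't pinned to a known artifact hash (pip supports this natively via `--require-hashes`). Here is a hedged sketch of the verification step; the package name and allowlist are hypothetical, and in practice the digests would come from a lock file:

```python
import hashlib

# Hypothetical allowlist mapping pinned requirements to expected SHA-256
# digests of their artifacts. (The digest below is sha256(b"hello"),
# standing in for a real wheel's hash from a lock file.)
ALLOWLIST = {
    "example-lib==1.2.0": (
        "2cf24dba5fb0a30e26e83b2ac5b9e29e"
        "1b161e5c1fa7425e73043362938b9824"
    ),
}

def verify_artifact(requirement: str, artifact_bytes: bytes) -> bool:
    """True only if the requirement is pinned and its artifact hash matches."""
    expected = ALLOWLIST.get(requirement)
    if expected is None:
        return False  # unpinned or unknown package: reject, don't guess
    return hashlib.sha256(artifact_bytes).hexdigest() == expected
```

The design choice that matters is the default: anything not on the allowlist is rejected outright, so a hallucinated package name fails closed rather than being installed on trust.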
Article 15 of the EU's AI Act seeks to preempt this scenario by mandating measures to test, mitigate, and control risks, including data or model poisoning. New US guidance requires that government-owned AI models, code, and data be made publicly available unless they pose operational risk. With code simultaneously under scrutiny and under attack, organizations developing and deploying AI will need a firm handle on the weaknesses and risks in everything from AI libraries to devices.
Proactive security will be at the heart of driving security by design, as regulations increasingly require an ability to find weaknesses before they become vulnerabilities.
Innovation meets risk meets reality
For many organizations, the risk will vary depending on the data they use. For example, healthcare companies will need to ensure that data privacy, security, and integrity are maintained across all outputs. Financial services companies will be looking to balance benefits such as predictive monitoring against regulatory concerns over privacy and fairness.
Both the EU and US regulatory approaches place a heavy emphasis on privacy, protection of fundamental rights, and transparency. From a product development perspective, the requirements depend on the type of application.
Overall, under the EU AI Act, applications like ChatGPT aren't considered high risk (yet), but they will have to ensure transparency around the use of AI, avoid generating illegal content, and disclose the use of copyrighted data in training models. Models with the capacity to pose systemic risk will be obliged to undergo testing prior to release and to report any incidents.
For products, services, and applications in the US, the overarching approach is also risk-based—with a heavy emphasis on self-regulation.
The bottom line is that restrictions increase with each level. To comply with the EU AI Act, before any high-risk deployment, developers will have to pass muster with a range of requirements including risk management, testing, data governance, human oversight, transparency, and cybersecurity. If you're in the lower risk categories, it's all about transparency and security.
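That escalating-restrictions structure can be made concrete with a small lookup table. The tier names and obligations below are a simplified paraphrase of the EU AI Act's framework for illustration, not authoritative legal categories:

```python
# Simplified sketch of EU AI Act-style risk tiers and the obligations
# that attach to each -- paraphrased, not legal text.
OBLIGATIONS = {
    "unacceptable": ["prohibited outright"],
    "high": ["risk management", "testing", "data governance",
             "human oversight", "transparency", "cybersecurity"],
    "limited": ["transparency", "security"],
    "minimal": ["voluntary codes of conduct"],
}

def requirements_for(tier: str) -> list[str]:
    """Look up the compliance obligations for a given risk tier."""
    try:
        return OBLIGATIONS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier}")
```

Even a toy model like this makes the compliance planning question explicit: classify the application first, and the checklist of obligations follows from the tier.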
Proactive security: Where machine learning meets human intelligence
Whether you're looking at the EU AI Act, the US AI regulations, or NIST 2.0, ultimately everything comes back to proactive security: finding the weaknesses before they metastasize into large-scale problems. A lot of that is going to start with code. If a developer misses something, or downloads a malicious or weak AI library, sooner or later that will manifest as a problem further up the supply chain. If anything, the new AI regulations have underlined the criticality of the issue and the urgency of the challenges we face. Now is a good time to break things down and get back to the core principles of security by design.
Ram Movva is the chairman and chief executive officer of Securin Inc. Aviral Verma leads the Research and Threat Intelligence team at Securin.
—
Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld's technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
AI Development And Agile Don't Mix Well, Study Shows
Agile software development has long been seen as a highly effective way to deliver the software the business needs. The practice has worked well within many organizations for more than two decades. Agile is also the foundation for scrum, DevOps, and other collaborative practices. However, agile practices may fall short in artificial intelligence (AI) design and implementation.
That insight comes from a recent report by RAND Corporation, the global policy think tank, based on interviews with 65 data scientists and engineers with at least five years of experience building AI and machine-learning models in industry or academia. The research, initially conducted for the US Department of Defense, was completed in April 2024. "All too often, AI projects flounder or never get off the ground," said the report's co-authors, led by James Ryseff, senior technical policy analyst at RAND.
Interestingly, several AI specialists see formal agile software development practices as a roadblock to successful AI. "Several interviewees (10 of 50) expressed the belief that rigid interpretations of agile software development processes are a poor fit for AI projects," the researchers found.
"While the agile software movement never intended to develop rigid processes (one of its primary tenets is that individuals and interactions are much more important than processes and tools), many organizations require their engineering teams to universally follow the same agile processes."
As a result, as one interviewee put it, "work items repeatedly had to either be reopened in the following sprint or made ridiculously small and meaningless to fit into a one-week or two-week sprint." In particular, AI projects "require an initial phase of data exploration and experimentation with an unpredictable duration."
RAND's research suggested other factors can limit the success of AI projects. While IT failures have been well documented over the past few decades, AI failures take on an alternative complexion. "AI seems to have different project characteristics, such as costly labor and capital requirements and high algorithm complexity, that make them unlike a traditional information system," the study's co-authors said.
"The high-profile nature of AI may increase the desire for stakeholders to better understand what drives the risk of IT projects related to AI."
The RAND team identified several leading causes of AI project failure, ranging from miscommunication about a project's purpose to underinvestment in supporting infrastructure.
While formal agile practices may be too cumbersome for AI development, it's still critical for IT and data professionals to communicate openly with business users. Interviewees in the study recommended that "instead of adopting established software engineering processes (which often amount to nothing more than fancy to-do lists), the technical team should communicate frequently with their business partners about the state of the project."
The report suggested: "Stakeholders don't like it when you say, 'it's taking longer than expected; I'll get back to you in two weeks.' They are curious. Open communication builds trust between the business stakeholders and the technical team and increases the likelihood that the project will ultimately be successful."
Therefore, AI developers must ensure technical staff understand the project purpose and domain context: "Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure. Ensuring effective interactions between the technologists and the business experts can be the difference between success and failure for an AI project."
The RAND team also recommended choosing "enduring problems". AI projects require time and patience to complete: "Before they begin any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year. If an AI project is not worth such a long-term commitment, it most likely is not worth committing to at all."
While focusing on the business problem and not the technology solution is crucial, organizations must invest in the infrastructure to support AI efforts, suggested the RAND report: "Up-front investments in infrastructure to support data governance and model deployment can substantially reduce the time required to complete AI projects and can increase the volume of high-quality data available to train effective AI models."
Finally, as noted above, the report suggested AI is not a magic wand and has limitations: "When considering a potential AI project, leaders need to include technical experts to assess the project's feasibility."