
When Automation And AI Are Better Than People For Software Deployment

Whether for high-speed trains or Python deployments, automated failsafes are already necessary, and will only become more so as complexity and speed increase. (Image: Getty)

This piece is about secure deployment of enterprise software, but I want to set the scene with an analogy. A little while back, a company where I worked partnered with a systems integrator in Europe that was deploying high-speed train systems. One of the key implementation requirements was a highly reliable mesh of sensors and cameras throughout the rail system. The rationale was that train engineers were unable to stop trains fast enough using only their own vision and reflexes. By the time the engineer saw something on the tracks, it was too late to stop the train. So engineers needed a new set of "eyes" to maximize the potential speed of the train.

Solutions like this are not uncommon in physical operational environments. Safety systems are regularly deployed in factories that can shut down a machine faster than any person could when there is a problem. For that matter, we see this all the time in fire-extinguishing systems inside commercial vehicles and, closer to home, airbags in our cars.

Digital versions of automated safety systems for software development are now becoming more common as we see massive scale from cloud computing and increasingly high-velocity automation driven by AI. And while we should be cautious about the potential negative impacts of AI and automation for things like disinformation or replacing jobs, we also must accept that there are some necessary tasks that humans simply will not be able to perform as we become even more reliant on computing. In other words, it is inevitable that we will need automation to control our automation.

Catching Software Vulnerabilities Before Exploits Can Happen

This brings me to an interesting story I recently heard from software supply chain platform company JFrog. Like a lot of software infrastructure companies, JFrog began in the developer operations space, then broadened its offerings into other operational areas including cybersecurity. Security is a natural extension for JFrog because if you can catch and fix software security vulnerabilities prior to deployment or right after deployment, you can significantly mitigate the risk of a security breach.

In this case, the vulnerability was caused by human error: a committer accidentally left an exploitable access file in Python. Had this back door been exploited, it would have left every Python system in the world vulnerable. Given the widespread use of Python, the impact could have been massive. (Think about Stuxnet or the recent CrowdStrike issue in terms of magnitude.) JFrog's vice president of product marketing, Jens Eckels, wrote a great blog post about this event and how JFrog handled it.

The good news is that JFrog regularly performs R&D and testing for its DevSecOps tools on public repositories, and the team caught this problem as part of those routine efforts. They immediately notified the committer, who quickly resolved the issue—and the crisis was averted.
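
The details of JFrog's scanners are beyond the scope of this piece, but the underlying pattern is easy to picture: continuously sweep public repositories and artifacts for anything that looks like a leaked credential, then route any hit to a human for verification. The Python sketch below is a minimal, hypothetical illustration of that pattern; the token formats and the simple file walk are assumptions for illustration, not a description of JFrog's actual tooling.

    # Hypothetical sketch of scanning a checked-out repository for leaked credentials.
    # Real scanners use far richer detection (entropy analysis, binary and container
    # inspection, token validation against the issuer) before anyone is notified.
    import re
    from pathlib import Path

    # Assumed token patterns for illustration only; not exhaustive.
    TOKEN_PATTERNS = {
        "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
        "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
    }

    def scan_repo(root: str) -> list[tuple[str, str]]:
        """Return (file, token_type) pairs for every suspected leaked secret."""
        findings = []
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, pattern in TOKEN_PATTERNS.items():
                if pattern.search(text):
                    findings.append((str(path), name))
        return findings

    if __name__ == "__main__":
        for file_path, kind in scan_repo("."):
            print(f"possible {kind} leaked in {file_path}")

In practice a sweep like this runs continuously and at enormous scale, which is exactly why it is a job for automation rather than for human reviewers.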

Software Security Automation Has Both Technical And Human Elements

Since there was no catastrophic event, this story may seem anticlimactic, but there are a few lessons to be learned from what happened, how JFrog handled it and what it means in a hyper-automated future.

  • There are not enough eyes in the world to keep pace with software development — We are now in an age where applications are assemblies of services that live in a state of continuous deployment. That's why adding security operations into your CI/CD toolchain makes sense as part of an overall cybersecurity strategy (a sketch of such a pipeline gate follows this list). The job has steadily become too complex for humans to manage; indeed, if humans are still in the loop, the pace of change slows down too much—and the risk of error increases. In this context, JFrog's averted catastrophe is a great reminder that an attentive focus on security is at least as important as any part of the developer toolchain—if not more so.
  • More companies need to understand their development communities — Clearly JFrog's practice of scanning public repositories for problems with its software is not an entirely altruistic pursuit. The process leads to better products that are in turn sold to customers. However, JFrog's process also reflects the understanding it has of its developer community, which ties directly to how JFrog escalates issues when they are detected. According to Shachar Menashe, who leads security research at JFrog, when an issue is found, JFrog escalates in three possible ways. First, if there is a connection to an existing customer (no matter what product they use), JFrog notifies them via its own customer support team. Second, if it's not a customer but there is a name or e-mail attached to the project (such as an open source project lead), JFrog notifies that person via its R&D team. Third, if there is no known name or e-mail address, JFrog takes no action but will flag the issue with its customer support team as a precaution. There are some people in the development community who lament the declining spirit of collaboration and openness. But I think JFrog offers us an example of the ongoing give-and-take that comes from working together in a community. Most of the repos the team scans are not sales opportunities at all, but JFrog keeps helping others where it can, and in turn reaps the benefits of a better product—and greater community goodwill—in the long run.
  • Automation begets more automation — I recently had a great talk with a senior engineering leader at one of Moor Insights & Strategy's clients. The conversation was about where AI developer assistance is headed. But instead of talking mainly about the latest AI features, our talk shifted towards the human side of things. We both agreed that automation success—whether it comes from AI or something else—is ultimately a function of human trust. The more we trust something, the more we are willing to delegate complex work to it. For example, an experienced employee can take on a more complex project than the newbie on the team. But of course there are additional costs that come with that level of trust. The more experienced employee will need to be paid more or they may have more complex professional development goals. That doesn't make more experience bad or good. It's just a reminder that the higher costs of a more trusted resource are worth it to foster a successful project. There's an obvious parallel to the technical side of automation. As we begin to trust AI as a means to drive more productivity, we need to accept that there will also be adjacent new requirements such as responsibility guardrails required to maximize the technology's potential. Those requirements will inevitably cost something. They will also be automated, since AI is so scalable, and they likely will require some form of change management, since AI is so new.

So, no matter whether we're talking about an employee, a factory, a high-speed train, or your IDE, AI or any other form of automation should not simply replace what's already in place—but improve upon it.
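
To make the first point in the list above concrete, here is a hedged sketch of what a security gate in a CI/CD toolchain might look like: a build step that audits pinned dependencies and stops the pipeline when known vulnerabilities are reported. It wraps pip-audit, an open-source auditor for Python dependencies; the wrapper script and file names are assumptions for illustration, not a prescription for any particular toolchain.

    # Sketch of a CI/CD security gate. Assumes pip-audit is installed in the build
    # environment; substitute whatever scanner your toolchain already uses.
    import subprocess
    import sys

    def run_security_gate(requirements_file: str = "requirements.txt") -> int:
        """Audit pinned dependencies; return a nonzero code if issues are found."""
        result = subprocess.run(
            ["pip-audit", "-r", requirements_file],
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print("Security gate failed: known vulnerabilities reported.", file=sys.stderr)
            print(result.stderr, file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        # A nonzero exit code causes most CI systems to fail the pipeline stage.
        sys.exit(run_security_gate())

A gate like this complements, rather than replaces, secret scanning and artifact signing, which is the "improve upon what's already in place" point in a nutshell.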


    What AI Regulations Mean For Software Developers

    Both the US and the EU have mandated a risk-based approach to AI development. Whatever your risk level, ultimately it's all about transparency and security.

    As organizations of all sizes and sectors race to develop, deploy or buy AI and LLM-based products and services, what are the things they should be thinking about from a regulatory perspective? And if you're a software developer, what do you need to know?

    The regulatory approaches of the EU and US have, between them, firmed up some of the more confusing areas. In the US, we've seen a new requirement that all US federal agencies have a chief AI officer and submit annual reports identifying all AI systems in use, any risks associated with them, and how they plan to mitigate those risks. This echoes the EU's requirements for similar risk assessment, testing, and oversight before deployment in high-risk cases.

    Both have adopted a risk-based approach, with the EU specifically identifying the importance of "Security by design and by default" for "High-risk AI systems." In the US, CISA states that "Software must be secure by design, and Artificial Intelligence is no exception."

    This is likely to be music to the ears of anyone familiar with proactive security. The more we do to reduce the friction between machine logic and human analysis, the more we can anticipate threats and mitigate them before they become a problem.

    Code is at the core

    At a fundamental level, "Security by design, by default" begins with the software developer and the code being used to build AI models and applications. As AI development and regulation expand, the role of developers will evolve to include security as part of daily life. AI is ultimately built from code, and if you're a developer using AI, you're going to be keeping a closer eye than ever on weaknesses and security.

    Hallucinated or deliberately poisoned software packages and libraries are already emerging as a very real threat. Software supply chain attacks that start life as malware on developer workstations could have serious consequences for the security and integrity of data models, training sets, or even final products. It's worth noting that the malicious submissions that have dogged code repositories for years are now appearing on AI development platforms as well. With reports of massive volumes of data being exchanged between AI and machine learning environments such as Hugging Face (aka "The GitHub of AI") and enterprise applications, baking security in from the outset has never been more critical.
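
    One widely available defense against poisoned or look-alike packages is to refuse any artifact whose cryptographic digest doesn't match a value you already trust; pip's --require-hashes mode applies the same principle at install time. The snippet below is a minimal Python sketch of that check. The expected digest is a placeholder, an assumption for illustration; in practice it would come from a lockfile or an internal artifact registry.

        # Minimal sketch: verify a downloaded package artifact against a known-good
        # SHA-256 digest before it is installed or loaded.
        import hashlib
        import sys
        from pathlib import Path

        # Placeholder digest for illustration; supply the real pinned value in practice.
        EXPECTED_SHA256 = "0" * 64

        def artifact_matches(path: str, expected: str = EXPECTED_SHA256) -> bool:
            """Return True only if the file's SHA-256 digest equals the pinned value."""
            digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
            return digest == expected

        if __name__ == "__main__":
            artifact = sys.argv[1]
            if not artifact_matches(artifact):
                print(f"Refusing {artifact}: digest mismatch.", file=sys.stderr)
                sys.exit(1)
            print(f"{artifact} matches the pinned digest.")

    The same idea extends to AI artifacts themselves: model weights and datasets pulled from public hubs can be pinned and verified before they ever reach a training or inference environment.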

    Article 15 of the EU's AI Act seeks to preempt this scenario by mandating measures to test, mitigate, and control risks, including data or model poisoning. New guidance issued in the US calls for government-owned AI models, code, and data to be made publicly available unless they pose an operational risk. With code simultaneously under scrutiny and under attack, organizations developing and deploying AI will need to get a firm handle on the weaknesses and risks in everything from AI libraries to devices.

    Proactive security will be at the heart of driving security by design, as regulations increasingly require an ability to find weaknesses before they become vulnerabilities.

    Innovation meets risk meets reality

    For many organizations, the risk will vary depending on the data they use. For example, healthcare companies will need to ensure that data privacy, security, and integrity are maintained across all outputs. Financial services companies will be looking to balance benefits such as predictive monitoring against regulatory concerns for privacy and fairness.

    Both the EU and US regulatory approaches place a heavy emphasis on privacy, protection of fundamental rights, and transparency. From a product development perspective, the requirements depend on the type of application:

  • Unacceptable risk: Systems that are considered a threat to humans will be banned, including government-run social scoring, biometric identification/categorization of people, and facial recognition, with some exceptions for law enforcement.
  • High risk: Systems with capacity to negatively impact safety or fundamental rights. These fall into two categories:
  • AI systems to be used in products that fall under EU product safety legislation including aviation, automotive, medical devices, and elevators.
  • AI systems that will be used in areas including critical infrastructure, education, employment, essential services, law enforcement, border control, or application of the law.
  • Low risk: Most current AI applications and services fall into this category, and will be unregulated. These include AI-enabled games, spam filters, basic language models for grammar-checking apps, etc.

    Overall, under the EU AI Act, applications like ChatGPT aren't considered high risk (yet!), but they will have to ensure transparency around the use of AI, avoid generating illegal content, and disclose any use of copyrighted data in training models. Models with the capacity to pose systemic risk will be obliged to undergo testing prior to release and to report any incidents.

    For products, services, and applications in the US, the overarching approach is also risk-based—with a heavy emphasis on self-regulation.

    The bottom line is that restrictions increase with each level. To comply with the EU AI Act, before any high-risk deployment, developers will have to pass muster with a range of requirements including risk management, testing, data governance, human oversight, transparency, and cybersecurity. If you're in the lower risk categories, it's all about transparency and security.

    Proactive security: Where machine learning meets human intelligence

    Whether you're looking at the EU AI Act, the US AI regulations, or NIST 2.0, ultimately everything comes back to proactive security, and finding the weaknesses before they metastasize into large-scale problems. A lot of that is going to start with code. If the developer misses something, or downloads a malicious or weak AI library, sooner or later that will manifest in a problem further up the supply chain. If anything, the new AI regulations have underlined the criticality of the issue—and the urgency of the challenges we face. Now is a good time to break things down and get back to the core principles of security by design.

    Ram Movva is the chairman and chief executive officer of Securin Inc. Aviral Verma leads the Research and Threat Intelligence team at Securin.

    Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld's technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.


    AI Development And Agile Don't Mix Well, Study Shows


    Agile software development has long been seen as a highly effective way to deliver the software the business needs. The practice has worked well within many organizations for more than two decades. Agile is also the foundation for scrum, DevOps, and other collaborative practices. However, agile practices may fall short in artificial intelligence (AI) design and implementation. 

    That insight comes from a recent report by RAND Corporation, the global policy think tank, based on interviews with 65 data scientists and engineers with at least five years of experience building AI and machine-learning models in industry or academia. The research, initially conducted for the US Department of Defense, was completed in April 2024. "All too often, AI projects flounder or never get off the ground," said the report's co-authors, led by James Ryseff, senior technical policy analyst at RAND.   

    Interestingly, several AI specialists see formal agile software development practices as a roadblock to successful AI. "Several interviewees (10 of 50) expressed the belief that rigid interpretations of agile software development processes are a poor fit for AI projects," the researchers found. 

    "While the agile software movement never intended to develop rigid processes -- one of its primary tenets is that individuals and interactions are much more important than processes and tools -- many organizations require their engineering teams to universally follow the same agile processes."

    As a result, as one interviewee put it, "work items repeatedly had to either be reopened in the following sprint or made ridiculously small and meaningless to fit into a one-week or two-week sprint." In particular, AI projects "require an initial phase of data exploration and experimentation with an unpredictable duration."

    RAND's research suggested other factors can limit the success of AI projects. While IT failures have been well documented over the past few decades, AI failures take on a different complexion. "AI seems to have different project characteristics, such as costly labor and capital requirements and high algorithm complexity, that make them unlike a traditional information system," the study's co-authors said.

    "The high-profile nature of AI may increase the desire for stakeholders to better understand what drives the risk of IT projects related to AI."

    The RAND team identified the leading causes of AI project failure:

  • "Industry stakeholders often misunderstand -- or miscommunicate -- what problem needs to be solved using AI. Too often, organizations deploy trained AI models only to discover that the models have optimized the wrong metrics or do not fit into the overall workflow and context." 
  • "Many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model."
  • "The organization focuses more on using the latest and greatest technology than on solving real problems for their intended users."
  • "Organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure."
  • "The technology is applied to problems that are too difficult for AI to solve. AI is not a magic wand that can make any challenging problem disappear; in some cases, even the most advanced AI models cannot automate away a difficult task."

    While formal agile practices may be too cumbersome for AI development, it's still critical for IT and data professionals to communicate openly with business users. Interviewees in the study recommended that "instead of adopting established software engineering processes -- which often amount to nothing more than fancy to-do lists -- the technical team should communicate frequently with their business partners about the state of the project."

    The report suggested: "Stakeholders don't like it when you say, 'it's taking longer than expected; I'll get back to you in two weeks.' They are curious. Open communication builds trust between the business stakeholders and the technical team and increases the likelihood that the project will ultimately be successful."

    Therefore, AI developers must ensure technical staff understand the project purpose and domain context: "Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure. Ensuring effective interactions between the technologists and the business experts can be the difference between success and failure for an AI project."

    The RAND team also recommended choosing "enduring problems". AI projects require time and patience to complete: "Before they begin any AI project, leaders should be prepared to commit each product team to solving a specific problem for at least a year. If an AI project is not worth such a long-term commitment, it most likely is not worth committing to at all."

    While focusing on the business problem and not the technology solution is crucial, organizations must invest in the infrastructure to support AI efforts, suggested the RAND report: "Up-front investments in infrastructure to support data governance and model deployment can substantially reduce the time required to complete AI projects and can increase the volume of high-quality data available to train effective AI models."

    Finally, as noted above, the report suggested AI is not a magic wand and has limitations: "When considering a potential AI project, leaders need to include technical experts to assess the project's feasibility."
