
OpenAI Plans App Store For AI Software, The Information Reports

June 20 (Reuters) - OpenAI, creator of the widely popular chatbot ChatGPT, plans to launch a marketplace that will allow developers to sell their AI models built on top of its own AI technology, news site the Information reported on Tuesday, citing people with knowledge of discussions at the company.

Enterprise customers using ChatGPT often tailor the technology to their specific uses, which range from identifying financial fraud from online transaction data to answering questions about specific markets based on internal documents. According to the news report, makers of such models could offer them to other businesses through OpenAI's proposed marketplace.

OpenAI CEO Sam Altman disclosed the potential plans during a meeting with developers in London last month, the report said.

Such a marketplace could compete with app stores run by some of the company's customers and technology partners - including Salesforce (CRM.N) and Microsoft (MSFT.O) - and help OpenAI's technology reach a broader customer base.

OpenAI did not immediately respond to a Reuters request for comment.

The Information also reported that two of the company's customers, Aquant, which makes software that manufacturers use to guide customers through device maintenance and repairs, and education app maker Khan Academy, might be interested in offering their ChatGPT-powered AI models on OpenAI's marketplace.

Since ChatGPT's release late last year, hundreds of businesses have adopted it to automate tasks and increase efficiency. Companies are also racing to offer their customers new tools and capabilities based on the AI software's advanced large language models (LLMs).

Reporting by Yuvraj Malik in Bengaluru; Editing by Pooja Desai



Rivian Hires Tesla, Meta And Apple Veteran To Lead Communications

Sarah O'Brien had a direct hand in the launch of the Tesla Model 3 and the Apple Watch

Rivian just picked up Sarah O'Brien, a communications executive from Meta, to lead its own comms department. She'll take over the role of Chief Communications Officer. Her history includes time at Tesla and Apple, and her new role will focus on shining a light on Rivian's latest innovations.

O'Brien began working at Tesla in 2016 after more than eight years at Apple. During her time at the tech company, she had a hand in communications surrounding the Apple Watch, the iPhone, the iPad, the App Store, and the iTunes Music Festival. When she moved over to Tesla, her focus included products like the Model 3, the Solar Roof, and the Semi, according to a statement from Rivian.

She moved on from Tesla in mid-2018 and has been at Meta ever since. At Rivian, she'll oversee product, consumer, internal, and corporate communications. She'll report directly to CEO RJ Scaringe, and alongside the announcement she released a statement about her vision for Rivian and its future.



"Rivian is still in the early chapters of its incredible story, and I'm thrilled to be able to play a role in telling the next chapters," said O'Brien. "Rivian is delivering on an ambitious mission, not only to create electric vehicles that redefine the ownership experience but to provide its customers with real ways to get carbon out of transportation and drive real impact. By helping to tell Rivian's stories, we hope to inspire people from across the world to explore responsibly and preserve our natural world," she continued.

While O'Brien might not have a direct hand in production, design, or planning, her influence shouldn't be overlooked. The way she presents Rivian to customers, investors, employees, and other companies will be vital to the brand's success. As of this writing, the company's stock is down more than 80 percent from where it was at the start of 2022.


The company is also about to take its next big step of growth: launching two new products while maintaining production, interest, and sales of its R1T and R1S.


How Large Language Models (LLM) Will Power The Apps Of The Future

Generative AI, and particularly its language flavor exemplified by ChatGPT, is everywhere. Large Language Model (LLM) technology will play a significant role in the development of future applications. LLMs are very good at understanding language because of the extensive pre-training done for foundation models on trillions of lines of public-domain text, including code. Methods like supervised fine-tuning and reinforcement learning with human feedback (RLHF) make these LLMs even more effective at answering specific questions and conversing with users. As we get into the next phase of AI apps, a ladder of techniques is emerging: LLM calls, prompts, embeddings, chains, and agents.

LLM calls:

These are direct calls to completion or chat models from an LLM provider like Azure OpenAI, Google PaLM, or Amazon Bedrock. These calls use a very basic prompt and rely mostly on the internal memory of the LLM to produce the output.

Example: asking a basic model like "text-davinci" to "tell a joke". You give very little context, and the model relies on its internal pre-trained memory to come up with an answer (for instance, via Azure OpenAI).
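As a rough illustration, a direct LLM call might look like the following minimal Python sketch, assuming the openai package (v1.x) and an API key in the environment; the model name and prompt are placeholders rather than anything specific to a particular provider.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
# A bare-bones prompt: the model answers purely from its pre-trained memory.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke."}],
)
print(response.choices[0].message.content)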

Prompts:

The next level of intelligence is adding more and more context to prompts. Prompt-engineering techniques can be applied to LLMs to make them give customized responses. For example, when generating an email to a user, some context about the user, their past purchases, and their behavior patterns can serve as prompt context to better customize the email. Users familiar with ChatGPT will know different methods of prompting, like giving examples that the LLM uses to build its response. Prompts augment the internal memory of the LLM with additional context. An example is below.
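As a hedged sketch of this idea (the customer fields and model name below are made-up placeholders, not part of any real system), the same call can be customized simply by folding user context into the prompt string:

from openai import OpenAI

client = OpenAI()
customer = {
    "name": "Jane",
    "last_purchase": "trail running shoes",
    "behavior": "browses hiking gear on weekends",
}
# The prompt carries the extra context that the model's internal memory lacks.
prompt = (
    "Write a short, friendly marketing email.\n"
    f"Customer name: {customer['name']}\n"
    f"Last purchase: {customer['last_purchase']}\n"
    f"Behavior: {customer['behavior']}"
)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)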

Embeddings:

Embeddings take prompts to the next level by searching a knowledge store for context and appending what is found to the prompt. The first step is to make a large store of unstructured text searchable by indexing it and populating a vector database. For this, an embedding model like OpenAI's 'ada' is used: it takes a chunk of text and converts it into an n-dimensional vector. These embeddings capture the semantics of the text, so similar sentences have embeddings that are close to each other in vector space. When a user enters a query, that query is also converted into an embedding and matched against the vectors in the database. This yields the top 5 or 10 matching text chunks for the query, which form the context. The query and context are then passed to the LLM to answer the question in a human-like manner.
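A minimal retrieval sketch follows; it uses an in-memory list instead of a real vector database, and the chunk texts, model names, and top-k value are illustrative assumptions:

import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    # Convert a chunk of text into an n-dimensional embedding vector.
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)

chunks = ["Policy chunk about refunds ...", "Policy chunk about shipping ..."]
index = [(chunk, embed(chunk)) for chunk in chunks]  # toy "vector database"

query = "How do refunds work?"
q_vec = embed(query)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank chunks by similarity to the query and keep the best matches as context.
ranked = sorted(index, key=lambda item: -cosine(q_vec, item[1]))
context = "\n".join(chunk for chunk, _ in ranked[:2])

answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)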

Chains:

Today, chains are the most advanced and mature technology being used extensively to build LLM applications. Chains are deterministic: a sequence of LLM calls is joined together, with the output of one flowing into one or more others. For example, an LLM call could query a SQL database to get a list of customer emails and send that list to another LLM that generates personalized emails to those customers. These LLM chains can be integrated into existing application flows to produce more valuable outcomes. Using chains, we can augment LLM calls with external inputs like API calls and integrations with knowledge graphs to provide context. Moreover, with multiple LLM providers available today, like OpenAI, AWS Bedrock, Google PaLM, and MosaicML, we can mix and match LLM calls within a chain: for chain elements that need limited intelligence, a smaller model like 'gpt-3.5-turbo' could be used, while 'gpt-4' could be used for more advanced tasks. Chains give an abstraction over data, applications, and LLM calls.
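Frameworks such as LangChain package this pattern, but a framework-free sketch of the email example above is enough to show the idea; the database schema, file name, and model name are assumptions for illustration:

import sqlite3
from openai import OpenAI

client = OpenAI()

def fetch_customers(db_path: str) -> list[tuple[str, str]]:
    # Step 1 of the chain: a plain SQL query whose output feeds the next step.
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT name, email FROM customers").fetchall()

def draft_email(name: str) -> str:
    # Step 2 of the chain: an LLM call that consumes step 1's output.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # a smaller model is fine for this simple step
        messages=[{"role": "user",
                   "content": f"Write a short, friendly product update email to {name}."}],
    )
    return resp.choices[0].message.content

for name, email in fetch_customers("customers.db"):
    print(email, "->", draft_email(name))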

Agents:

Agents are the subject of much online debate, particularly with respect to artificial general intelligence (AGI). Agents use an advanced LLM like 'gpt-4' or 'PaLM 2' to plan tasks rather than relying on pre-defined chains. When a user request comes in, the agent decides, based on the query, which tasks to call and dynamically builds a chain. For example, we might configure an agent with a command like "notify customers when the loan APR changes due to a government regulation update". The agent framework makes an LLM call to decide on the steps to take or the chains to build. Here, that would involve invoking an app that scrapes regulatory websites and extracts the latest APR rate, then an LLM call that searches the database and extracts the affected customers' emails, and finally generating an email to notify everyone.
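A drastically simplified agent loop, just to make the control flow concrete, might look like the sketch below; the tools are stubs, the goal string mirrors the example above, and a real agent framework would add planning, memory, and error handling:

import json
from openai import OpenAI

client = OpenAI()

def scrape_regulations(_: str) -> str:
    return "New APR ceiling: 7.9%"  # stand-in for a regulatory-site scraper

def find_affected_customers(_: str) -> str:
    return "alice@example.com, bob@example.com"  # stand-in for a database query

TOOLS = {"scrape_regulations": scrape_regulations,
         "find_affected_customers": find_affected_customers}

goal = "Notify customers when the loan APR changes due to a regulation update."
observations = ""

for _ in range(3):  # cap the number of planning steps
    prompt = (
        f"Goal: {goal}\nObservations so far: {observations}\n"
        f"Available tools: {list(TOOLS)}.\n"
        'Reply with JSON only: {"tool": "<name>", "input": "<string>"} '
        'or {"tool": "done"}.'
    )
    reply = client.chat.completions.create(
        model="gpt-4",  # planning benefits from a stronger model
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    step = json.loads(reply)  # assumes the model followed the JSON instruction
    if step.get("tool") not in TOOLS:
        break
    observations += f"\n{step['tool']} -> {TOOLS[step['tool']](step.get('input', ''))}"

print("Context gathered for the notification email:", observations)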

Final Thoughts

LLMs are a rapidly evolving technology, and better models and applications are being launched every week. From plain LLM calls up to agents is an intelligence ladder, and as we move up it, we build increasingly complex, autonomous applications. Better models will mean more effective agents, and the next generation of applications will be more autonomous and capable as a result.



