AWS releases new products to deepen its investment in generative AI


AWS recently announced several new tools for training and deploying generative AI on its cloud platform, extending its reach further into the field of AI software development.

AWS published a post on the AWS Machine Learning blog detailing these new offerings, which center on foundation models: large-scale, pre-trained language models that serve as the basis for targeted natural language processing tasks. These base models are typically trained on large amounts of text data using deep learning techniques, enabling them to learn to understand human language and generate text.

Using pre-trained base models such as OpenAI's GPT can save developers significant time and resources on tasks like text generation, sentiment analysis, and language translation.

Multiple large language model options

The new service, called Bedrock, provides foundation models from a variety of sources through an API. These include the Jurassic-2 multilingual large language models from AI21 Labs, which can generate text in multiple languages, and Claude, the LLM from Anthropic PBC, which is trained for conversational and text-processing tasks following Anthropic's principles for building responsible AI systems. Users can also access models from Stability AI as well as Amazon's own LLMs through the API.
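To make the API access concrete, here is a minimal sketch of what calling a Bedrock-hosted model might look like through the AWS SDK for Python (boto3). The model ID and the request body shape are assumptions for illustration, not details confirmed in the announcement; consult the Bedrock documentation for the models available to your account.

```python
# Minimal sketch: invoking a Bedrock-hosted foundation model with boto3.
# The model ID and request/response shapes below are assumptions for
# illustration only.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="ai21.j2-mid-v1",  # hypothetical Jurassic-2 model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"prompt": "Translate to French: Hello, world.",
                     "maxTokens": 50}),
)

result = json.loads(response["body"].read())
print(result)
```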

Swami Sivasubramanian, vice president of database, analytics, and machine learning at AWS, writes that because the base models are pre-trained at internet scale, they can be customized with relatively little additional training. He cites the example of a content marketing manager at a fashion retailer who can feed Bedrock as few as 20 examples of great taglines from past campaigns, along with relevant product descriptions; Bedrock can then automatically start generating effective social media content, display ads, and web copy for new handbag products.
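To illustrate the few-shot idea, here is a hypothetical sketch of how such examples might be packed into a single prompt. The campaign data and prompt layout are invented for illustration; Bedrock's actual customization interface may work differently.

```python
# Illustrative sketch of few-shot customization: packing a handful of
# labeled examples into one prompt. All example data is hypothetical.
past_campaigns = [
    ("Leather tote, fits a 15-inch laptop", "Carry the office. Leave the bulk."),
    ("Compact crossbody bag, water-resistant", "Rain or rush hour, hands free."),
    # ... up to ~20 (description, tagline) pairs from past campaigns
]

prompt_parts = ["Write a tagline for each product description.\n"]
for description, tagline in past_campaigns:
    prompt_parts.append(f"Product: {description}\nTagline: {tagline}\n")

# End with the new product, leaving the tagline for the model to complete.
prompt_parts.append("Product: Structured mini handbag in pebbled leather\nTagline:")

prompt = "\n".join(prompt_parts)
# `prompt` could then be sent to a Bedrock model as in the earlier sketch.
```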

In addition to Bedrock, AWS has introduced two new Titan large language models. The first is a generative LLM for summarization, text generation, classification, open-ended Q&A, and information extraction. The second is an embeddings model that converts text input into numerical representations capturing its semantic meaning, which helps generate contextual responses that go beyond literal word matching.
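A small self-contained sketch shows why embeddings help beyond word matching: vectors for semantically related texts point in similar directions even when the texts share no words. The vectors below are made up for illustration; in practice they would come from an embeddings model such as Titan's, with far more dimensions.

```python
# Sketch: comparing texts by embedding direction rather than shared words.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: values near 1.0 mean
    the vectors (and thus the texts) point in similar directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings (real ones have ~1,000+ dimensions).
query = [0.8, 0.1, 0.3]          # "How do I return a purchase?"
doc_related = [0.7, 0.2, 0.4]    # "Our refund policy for orders"
doc_unrelated = [0.1, 0.9, 0.0]  # "New summer handbag collection"

print(cosine_similarity(query, doc_related))    # high: semantically close
print(cosine_similarity(query, doc_unrelated))  # low: different topic
```

Note that the query and the related document share no words, yet their vectors are close; that is the property that lets embeddings-backed systems return contextual answers rather than keyword matches.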

It is worth noting that OpenAI (in which Microsoft is a major investor) was not mentioned in the announcement, but given the market demand for large language models, that omission should not be an obstacle for Amazon.

Rajesh Kandaswamy, a leading Gartner analyst and researcher, said, "There is a rush to create a lot of technology, and at this stage, almost everything you see has multiple options from multiple innovative companies."

AWS is behind Microsoft and Google in launching its own large language model, but that won't be a competitive barrier, Kandaswamy said: "I don't think anyone is so far behind that they have to catch up. It looks like there's a big race going on in the market, but the customers we talk to, other than very early adopters, don't yet know what to do with it."

Hardware enhancements

AWS has also enhanced the hardware used to deliver training and inference on the AWS cloud. The new network-optimized EC2 Trn1n instances, built on AWS' proprietary Trainium processors, deliver 1,600 Gbps of network bandwidth, a roughly 20 percent performance boost over the earlier Trn1 instances. AWS' Inf2 instances, which use the Inferentia2 processor for inference on large generative AI applications running models with hundreds of billions of parameters, are now generally available.
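For readers who want to try the new silicon, a minimal sketch of requesting one of the new Inf2 instances through the standard EC2 API follows; the AMI ID and key pair name are placeholders for illustration.

```python
# Sketch: launching an Inferentia2-backed Inf2 instance with boto3's EC2 API.
# The AMI ID and key pair below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: e.g. a Deep Learning AMI
    InstanceType="inf2.xlarge",       # smallest Inferentia2 instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
)
```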

AWS also introduced CodeWhisperer, an AI coding tool that uses a base model to generate code suggestions in real time based on natural language comments and existing code in integrated development environments. The tool supports Python, Java, JavaScript, TypeScript, C#, and ten other languages, and can be accessed through various IDEs.

Developers can simply tell CodeWhisperer to perform a task such as "parse a CSV string of songs" and ask it to return a structured list based on values such as artist, title, and highest chart ranking, writes Sivasubramanian. CodeWhisperer then generates "a complete function to parse the string and return the specified list." Developers using the preview completed tasks 57 percent faster and were 27 percent more likely to complete them successfully than those working without the tool, he said.
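For a sense of what such a generated function might look like, here is an illustrative sketch in Python. The column names and structure are assumptions for this example, not CodeWhisperer's actual output.

```python
# Illustrative only: the kind of function an assistant might generate from
# a comment like "parse a CSV string of songs". Field names are assumptions.
import csv
import io

def parse_songs(csv_text: str) -> list[dict]:
    """Parse a CSV string of songs into a list of dicts keyed by
    artist, title, and highest chart ranking."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {
            "artist": row["artist"],
            "title": row["title"],
            "highest_rank": int(row["highest_rank"]),
        }
        for row in reader
    ]

songs = parse_songs("artist,title,highest_rank\nQueen,Bohemian Rhapsody,1")
print(songs)  # [{'artist': 'Queen', 'title': 'Bohemian Rhapsody', 'highest_rank': 1}]
```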

The field of large language models is likely to remain fragmented and chaotic for the foreseeable future, as many players try to capitalize on the proof-of-concept success of applications such as ChatGPT. Kandaswamy said it is unlikely that any one model will dominate the market the way Google's natural language APIs have dominated the speech recognition space.

Just because one model is good at one thing doesn't mean it will be good at everything, he said. "It's possible that in the next two or three years everyone will be offering other people's models, and there will be more hybrid and cross-technology relationships in the future."