Amazon shoppers may want to think twice before dipping their toes into the Nile











A leak reported by Business Insider revealed that Amazon is working on a new AI search engine called Project Nile, which aims to change the way we shop online by turning more browsing shoppers into buyers.


The fact that a company called Amazon is launching an internal project called Nile (between them, the two longest rivers on the planet) gives us an idea of the scale of this effort: it is meant to transform how users navigate an enormous product catalog by improving the search experience.


The idea, championed by former Microsoft executive Joseph Sirosh, who joined Amazon in October 2022, is to create a conversational system that acts as an assistant during the search process, interacting like a chatbot to try to understand exactly what users are looking for. Given the current chaos, in which Amazon often offers products that have nothing to do with your query, driven by advertising rather than relevance, it may seem that everything is about to improve.


There is a problem, however: any trust between Amazon and its users is long gone. Let me explain: if you have used Amazon for some time, you will know that the company that once planned to put the user first and improve their satisfaction in order to keep them is now a predator that sells you to suppliers who pay to push their products to the top. It won't say "sorry, we don't have that"; instead it will return a suspicious number of vaguely similar products, or sometimes not even that.


Seen in this light, the idea of trusting the seller seems absurd: there is a fine line between giving you what you really want and trying to convince you that what you want is whatever the company is interested in selling. And we are talking about a company that developed algorithms like…






Amazon pays $4 billion for Anthropic's Claude, a chatbot platform that rivals OpenAI's ChatGPT and Google's Bard


Amazon announced Monday that it will invest up to $4 billion in Anthropic, the company that built the powerful Claude chatbot. Claude has established itself as one of the main competitors to OpenAI's ChatGPT and Google's Bard in the race for AI dominance.



Claude has not had much backing so far, but it clearly needs it, given the high cost of remaining competitive in building the large language model (LLM) technology that drives these chatbots. The investment is part of a larger partnership announced by the two companies, with Anthropic agreeing to use Amazon's cloud computing services for essential workloads in exchange for the investment.


The deal marks Amazon's first tie-up with a chatbot giant, at a time when Microsoft and Google have already placed big bets on chatbots for their cloud computing platforms. In fact, Amazon's investment contradicts statements it made in recent months that it wanted to remain agnostic about LLMs. It is possible, though, that Amazon always intended to make a big bet of this kind and adopted the agnostic stance so as not to look slow in placing one. In addition, Amazon still has several horses in the LLM race, so this money may serve not only to hedge its own efforts but also to guarantee access to technology, skills and knowledge.


With last week's announcement of the Alexa LLM, Amazon is entering the race with a proprietary model alongside its business of providing model-hosting services (called Bedrock). Microsoft has invested more than $10 billion in OpenAI to acquire exclusive rights to offer OpenAI's chatbot technology within its own cloud services, which means OpenAI runs on Microsoft's Azure cloud. Meanwhile, Google released its own Bard chatbot, and Meta backed its Llama platform, which it made open source so other companies could build on Llama's core LLM technology. (Google did invest $300 million in Anthropic in February, and Anthropic chose Google Cloud Platform as its preferred cloud at the time.)


This makes Amazon's backing the main source of support for Claude, at a moment when money matters enormously for funding the expensive work of training competitive LLMs, which use hundreds of billions of parameters and demand vast amounts of computation. Anthropic had raised only about $2.7 billion to date. Amazon and Anthropic said the new strategic partnership will combine their technology and expertise in safe AI and make Anthropic's foundation models available to AWS customers.


One of Anthropic's main selling points is its argument that AI can be dangerous if proper safety measures are not in place. Anthropic has invested heavily in ensuring that its Claude foundation model follows a set of principles when generating output, rooted in an approach it calls Constitutional AI. Anthropic has tried to open a differentiation gap here compared with OpenAI's ChatGPT, whose safeguards it considers less robust.


Anthropic will use AWS Trainium and Inferentia chips to build, train and deploy its future foundation models, taking advantage of the cost, efficiency, scalability and security of AWS. The two companies will also collaborate on the future development of Trainium and Inferentia technology. AWS will become Anthropic's primary cloud provider for mission-critical workloads, including safety research and future foundation model development.


Anthropic plans to run the majority of its workloads on AWS, giving it access to advanced technology from the world's leading cloud provider. Anthropic has also made a long-term commitment to give AWS customers around the world access to future generations of its foundation models through Amazon Bedrock, the AWS managed service that provides secure access to the industry's top foundation models. In addition, Anthropic will give AWS customers early access to unique features for model customization and fine-tuning.
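To make the Bedrock arrangement concrete, here is a minimal sketch of how an AWS customer might prepare a request for a Claude model. It only builds the JSON payload; the prompt framing, the `anthropic.claude-v2` model ID, and the `invoke_model` call (left in comments, since it needs AWS credentials and boto3) follow the Anthropic text-completions convention documented for Bedrock at launch, and should be treated as assumptions to verify against the current docs rather than a definitive integration.

```python
import json


def build_claude_request(user_message: str, max_tokens: int = 300) -> str:
    """Build a JSON request body in the Anthropic text-completions
    format that Bedrock's Claude models accepted at launch
    (assumed format -- check the current Bedrock docs before use)."""
    payload = {
        # Claude's completion API expects the Human/Assistant framing.
        "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.5,
    }
    return json.dumps(payload)


# Actually sending the request would look roughly like this
# (requires AWS credentials, so it is left as a comment):
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(
#       modelId="anthropic.claude-v2",  # assumed model ID
#       body=build_claude_request("Summarize this product review: ..."),
#   )
#   print(json.loads(response["body"].read())["completion"])

if __name__ == "__main__":
    body = json.loads(build_claude_request("Hello, Claude"))
    print(body["prompt"])
```

The payload builder is kept separate from the network call so it can be inspected and tested without an AWS account.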


  • Amazon will invest up to $4 billion in Anthropic and hold a minority stake in the company.



Amazon developers and engineers will be able to build with Anthropic's models through Amazon Bedrock to incorporate AI-powered capabilities into their work, enhance existing applications, and create new customer experiences across Amazon's businesses.


VentureBeat recently published an article comparing the new Claude Pro to the paid version of ChatGPT; one thing that is clear is Claude's ability to digest far more content at once, thanks to its 100,000-token context window.
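To put a 100,000-token window in perspective, a common rule of thumb for English text is roughly four characters per token. The sketch below uses that heuristic (real tokenizers are BPE-based and vary by model and language, so the numbers are assumptions, not Anthropic's actual tokenizer) to estimate whether a document fits in the window.

```python
def estimated_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 chars/token rule of thumb.
    Real tokenizers vary by model and language."""
    return int(len(text) / chars_per_token)


def fits_in_context(text: str, context_window: int = 100_000) -> bool:
    """Check whether a document plausibly fits in a 100k-token
    context window under the heuristic above."""
    return estimated_tokens(text) <= context_window


# A ~300-page book at ~1,800 characters per page is ~540,000
# characters, or roughly 135,000 estimated tokens: just over the window.
book = "x" * 540_000
print(fits_in_context(book))        # rough estimate only
print(estimated_tokens("A" * 400))  # ~100 tokens under the heuristic
```

Under this estimate, the window holds on the order of 200 pages of prose in a single prompt, which is what makes the long-document comparisons in reviews like VentureBeat's possible.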





On the infrastructure side, AWS continues to offer instances with NVIDIA chips as well as AWS custom silicon: AWS Trainium for AI training and AWS Inferentia for AI inference, the company said. Essentially, AWS strives to give its customers the widest selection of foundation models from multiple vendors, so customers can customize those models while maintaining the privacy and security of their own data and integrating them with the rest of their AWS workloads.


All of this is delivered through AWS's newer service, Amazon Bedrock. Today's announcement fits in the middle of that stack, the company said: it gives the market access to Anthropic's foundation models and lets customers use their own data to customize those models, deploying the resulting capabilities securely through managed services such as Amazon Bedrock.


Finally, at the top layer, AWS offers AI tools and services built for end users, such as Amazon CodeWhisperer, a generative-AI coding assistant that suggests code snippets directly in the developer's editor, speeding up developers as they write code, according to the company.


