A revolutionary AI has been created to mimic human actions on a computer, from clicking buttons to typing text. Discover how this innovation could transform the job market and potentially replace humans in daily tasks!
Recently, Anthropic, a tech company specializing in artificial intelligence, made a major announcement that promises to impact the automation and digital productivity market. After months of development, it unveiled Claude 3.5 Sonnet, an AI model that can interact directly with applications on your computer.
With this update, Anthropic enters the race for technology capable of automating processes that previously depended exclusively on human intervention.
The new feature brings to light the concept of AI agents, which are automated systems capable of performing various tasks on computers, such as sending emails, browsing the internet and even using complex software.
With the support of a new API, called “Computer Use”, Claude 3.5 Sonnet can emulate human actions, such as clicking buttons, moving the cursor and typing text. According to the company, this advancement paves the way for the model to be used for practical day-to-day tasks, enhancing the automation of various back-office activities.
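To make the idea concrete, here is a minimal sketch of how a "Computer Use" request might be assembled. The model name, tool type, and display parameters below reflect the schema Anthropic documented at the October 2024 launch and may change in later versions; the helper function itself is illustrative, not part of any official SDK.

```python
def build_computer_use_request(instruction: str,
                               width: int = 1280,
                               height: int = 800) -> dict:
    """Assemble the JSON body for a Messages API call that enables
    the desktop-control ("computer") tool."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20241022",   # desktop-control tool type at launch
            "name": "computer",
            "display_width_px": width,     # screen size the model reasons about
            "display_height_px": height,
        }],
        "messages": [{"role": "user", "content": instruction}],
    }

request = build_computer_use_request("Open the browser and search for flights.")
print(request["tools"][0]["type"])  # computer_20241022
```

In practice this body would be sent to Anthropic's Messages API with the computer-use beta enabled; the response then contains tool-use blocks (clicks, keystrokes) for the calling program to execute.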
The potential of AI automation
Anthropic's promise is that this new technology can transform, on a large scale, the way we deal with administrative tasks. The goal is for Claude 3.5 Sonnet to automate repetitive processes such as filling out forms, responding to emails, and navigating enterprise systems.
Although task automation tools have been around for years, Anthropic says its solution is more robust and integrated, allowing developers to test and deploy the system in different environments.
Anthropic claims that Claude 3.5 Sonnet is more than just a conventional AI model. It has the ability to interpret what’s happening on a computer screen and make decisions based on that information.
The model was trained to understand screenshots and perform specific actions, such as moving the mouse to a certain point and clicking in the right place, without the need for continuous supervision.
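The loop described above (look at the screen, decide, act) can be sketched in a few lines. The helpers `capture_screen`, `ask_model`, and `perform` are hypothetical stand-ins for a real screenshot utility, the model API, and an input-automation library; the canned click response stands in for the model's decision.

```python
def capture_screen() -> bytes:
    # Stub: a real agent would grab an actual screenshot of the desktop.
    return b"<png bytes>"

def ask_model(screenshot: bytes) -> dict:
    # Stub: a real agent would send the screenshot to the model and get a
    # tool-use action back. Here we return a canned click for illustration.
    return {"action": "left_click", "coordinate": [640, 400]}

def perform(action: dict) -> None:
    # Stub: a real agent would drive the mouse/keyboard here.
    print(f"executing {action['action']} at {action.get('coordinate')}")

def run_agent(steps: int = 3) -> list:
    """Run the screenshot -> decide -> act loop a fixed number of times."""
    history = []
    for _ in range(steps):
        shot = capture_screen()    # 1. look at the screen
        action = ask_model(shot)   # 2. let the model decide what to do
        perform(action)            # 3. carry out the click or keystroke
        history.append(action)
    return history

run_agent(1)
```

The point of the sketch is the control flow: the model never touches the machine directly; a surrounding program feeds it screenshots and executes the actions it proposes.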
However, this technology still faces challenges. Despite the advances, the model is not infallible. Tests conducted by Anthropic itself show that Claude 3.5 Sonnet has difficulty with tasks such as scrolling pages and executing precision commands.
The company recommends that developers use the model initially on low-risk tasks until additional improvements can be implemented.
Applications and limitations
Companies that already invest in process automation find Claude 3.5 Sonnet to be a promising tool for simplifying daily operations. Several companies, such as Canva and the development platform Replit, have already started testing the solution in their systems.
Replit, for example, has created an autonomous verifier capable of assessing the quality of apps under development, while Canva is studying how AI can assist in the design and content editing process, offering automated support for designers.
However, the technology also raises questions about safety and ethical use. Anthropic has acknowledged that prolonged use of Claude 3.5 Sonnet could be risky, especially in situations involving sensitive data or critical interactions.
During testing, the model failed at complex tasks such as modifying airline reservations, succeeding in less than half of its attempts. In other words, Claude 3.5 Sonnet is still far from fully replacing human action in critical tasks.
Cybersecurity concerns have also been raised. Anthropic acknowledges that an AI model with access to desktop applications could be exploited for malicious purposes. This includes the risk of accessing personal information or exploiting vulnerabilities in software.
The company says it has taken preventative measures to mitigate these risks, including limiting access to certain resources and creating classifiers that identify and block potentially dangerous actions.
Despite its limitations, Anthropic believes the launch of Claude 3.5 Sonnet is an important step toward the future of intelligent automation. The company sees the model as a foundation for learning from mistakes and improving the technology over time.
AI and the corporate market
The AI automation market is booming, with companies from a variety of sectors investing billions of dollars in the development of AI agents. According to a recent survey by Capgemini, 10% of global organizations already use AI agents in their operations, and another 82% plan to adopt the technology over the next three years.
Companies like Salesforce and Microsoft are at the forefront of this movement, offering AI solutions to automate workflows and improve efficiency.
OpenAI, one of Anthropic's main competitors, is also developing its own line of AI agents, with the expectation that this technology will advance toward what it calls "superintelligent AI". Anthropic, however, is betting that its model stands out for its robustness and self-learning capacity, autonomously correcting errors and adjusting its actions as necessary.
Ethical risks and challenges
While the potential for automation is undeniable, implementing AI agents also poses ethical challenges. A recent study has shown that AI models can be "tricked" into performing harmful tasks, such as obtaining personal information illegally.
This raises concerns about how to ensure these tools are used ethically and responsibly, especially in corporate environments where sensitive data is at stake.
Anthropic claims that it is taking all necessary precautions to ensure the safe use of Claude 3.5 Sonnet. In addition to limiting access to critical websites and applications, the company retains screenshots captured by the model for at least 30 days, which allows it to detect any malicious behavior. Despite these measures, Anthropic acknowledges that there are no foolproof guarantees of security.
To address these challenges, Anthropic is collaborating with regulators and safety institutes dedicated to assessing the risks of AI models. The US AI Safety Institute and the UK AI Safety Institute are among the entities that tested Claude 3.5 Sonnet ahead of its public release.
The company also assured that it is prepared to continually review its security measures and adjust them as necessary.
The Future of AI with Anthropic
In addition to launching Claude 3.5 Sonnet, Anthropic also announced an updated version of its most affordable model, Claude 3.5 Haiku.
Haiku, which is expected to be available in the coming weeks, promises the same level of performance as more advanced models, but at a lower cost and with greater speed. This version will initially be released as a text model, with future updates that will allow for image analysis.
With this advancement, Anthropic aims to make its AI solutions more accessible to small and medium-sized businesses, which can benefit from automation without the high cost of traditional tools. The goal is to democratize access to cutting-edge technology and facilitate the integration of AI into different areas of the market.
In short, Anthropic is positioning itself as a leader in the digital automation revolution. The company believes its AI innovations have the potential to transform the economy and the way businesses operate, delivering smarter, more efficient solutions.