The well-known no-code platform Wix has released an AI website builder intended to turn a user’s description of their business into a unique website. The AI-driven tool lets users quickly produce a distinctive, business-ready site, with the ability to fine-tune its design, text, and layouts.
The release highlighted the conversational interface of Wix’s AI website builder, which lets customers “chat” with the AI to create their website. The platform’s AI then uses the user’s input to generate a site, which can be adjusted until it meets the user’s specific needs.
Users can start operating right away with the AI website builder’s integrated business solutions, which include an online store, event management, and scheduling. It also offers features common to many website builders, such as customer support, hosting, performance optimization, custom domains, SEO and analytics tools, and mobile optimization.
Wix’s AI website builder is free to use, but premium plans are also available, with monthly prices starting at $17. These subscriptions let customers connect their own domains, accept online payments, and access extra features.
Microsoft is beta testing a new Copilot feature that lets the AI-driven chatbot analyze and summarize files. The added capability makes it easy for users to share files with the chatbot.
First spotted by Leopeva64 on X, the latest version of Copilot has a new “Add a file” option that lets users upload files straight from their local computers. The chatbot can then read the file’s contents and answer questions based on that information.
In addition to providing summaries, the AI-driven assistant can help you locate a particular piece of information and even field follow-up questions on the subject. People have been using chatbots to summarize and digest vast amounts of data without manually combing through it ever since OpenAI introduced ChatGPT.
This is quite helpful for people who handle a lot of documents, though less so for those who never feed the chatbot files. The Canary version of Microsoft Edge appears to include the ability to analyze and summarize files with Copilot, but there is no word yet on when the feature will roll out to all users.
Microsoft has been continuously developing and testing new features for Copilot since its release last year. The tech giant revealed that Copilot, which turned one earlier this month, can now edit AI images created with Designer.
Cristiano Amon, the CEO of Qualcomm (QCOM), touted the chip giant’s AI initiatives on a virtual stage during his company’s annual shareholder meeting on Tuesday. Millions of Android-powered smartphones worldwide run Qualcomm’s processors, with the company’s most recent model, the Snapdragon 8 Gen 3, designed to power generative AI programs that run on the device.
In his prepared remarks, Amon stated, “We are bringing Gen AI capabilities to smartphone users worldwide.” “With our latest Snapdragon mobile platforms, which now feature significantly enhanced AI processing performance, we continue to be leaders in premium- and high-tier Android devices,” he continued.
Samsung’s most recent Galaxy S24 smartphone range includes the Snapdragon 8 Gen 3 processor, which powers some of the phones’ generative AI features. One such feature is Samsung’s Generative Edit, which lets you quickly alter or remove objects in your photos.
In the AI PC market, the company also intends to compete with Intel (INTC), AMD (AMD), and Nvidia (NVDA). All four semiconductor companies share the goal of becoming the leading provider of on-device generative AI for laptops and desktops. Intel’s Core Ultra CPUs already ship in high-end laptops from HP, Dell, and others, while Nvidia argues that machines running its graphics chips qualify as AI PCs.
Qualcomm will launch the Snapdragon X Elite CPU in the upcoming months, and the company has already claimed that the processor outpaces Intel’s chips on AI workloads.
Amon stated, “The Snapdragon X Elite features industry-leading AI performance and significant improvements in performance and battery life. It is our first implementation of the custom Qualcomm Oryon CPU.” “It is ready to become the industry standard for Copilot and Gen AI on-device experiences.”
Most generative AI experiences, such as ChatGPT, Google’s Gemini platform, and Microsoft’s multiple Copilots, are still handled in the cloud. However, the thinking goes that as generative AI models grow more specialized for particular tasks, on-device processing will become the dominant way of interacting with the technology.
Chip companies also claim that on-device generative AI apps will be more secure, since your data never leaves the device for the internet. For now, though, the average user has few on-device generative AI apps to choose from.
Generative AI is seen as a viable way to boost sales of smartphones and PCs at a time when both markets are trying to return to steady growth after a significant slowdown that followed the spike in sales during the early stages of the pandemic.
Qualcomm’s stock hasn’t responded as dramatically to the generative AI boom, despite Nvidia and AMD’s shares benefiting greatly from it. Although the company’s share price has increased by 33% in the past year, it is still far short of AMD’s 107% and Nvidia’s astounding 226% gains. During the same period, Intel’s shares rose by 48%, outperforming those of Qualcomm.
Nevertheless, the market for generative AI on devices is still in its infancy, and given its reach, Qualcomm may find itself in a particularly advantageous position should apps start to gain traction.
As part of a planned push to build out a suite of products, cryptocurrency trading firm DWF Labs intends to buy $10 million of TokenFi’s TOKEN over a two-year period, TokenFi creator “B” told CoinDesk in a Telegram chat on Tuesday.
The tokens will be purchased from TokenFi’s treasury, guaranteeing that the treasury has the funds needed to build new artificial intelligence (AI) products, such as the smart contract auditor and the TokenFi Generative AI for Non-fungible Tokens (NFTs).
Following the DWF Labs news, TOKEN jumped 50% to almost 9 cents, setting a record high. The CoinDesk 20 Index (CD20), meanwhile, rose more than 5%.
“TokenFi will benefit from this as the tokenization and artificial intelligence wave gains traction leading into the most explosive bull run in cryptocurrency history,” B wrote in the message.
Through its web interface, TokenFi lets users launch or tokenize assets. It began in 2023 as a sibling project of Floki, a dog-themed meme coin that later transformed into a decentralized finance platform and metaverse.
Anthropic unveiled Claude 3 on Monday, a family of three AI language models comparable to the ones behind ChatGPT. Anthropic says the models set new industry benchmarks across a range of cognitive tasks, in some cases approaching “near-human” capability. The models are available now through Anthropic’s website, though the most powerful one requires a subscription. They are also accessible to developers via an API.
The three models of Claude 3 (Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus) represent escalating levels of complexity and parameter count. Sonnet currently powers the Claude.ai chatbot for free with an email sign-in. As noted, however, Opus is available through Anthropic’s online chat interface only with a subscription to “Claude Pro,” a $20-per-month service offered on the Anthropic website. All three models have a 200,000-token context window. (The context window is the number of tokens, or word fragments, that an AI language model can process at once.)
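To get a feel for what a 200,000-token window holds in practice, here is a minimal Python sketch using the common rule of thumb that one token averages roughly four characters of English text. The heuristic is an assumption for illustration only; real counts depend on the model’s tokenizer.

```python
# Rough estimate of whether a document fits in a context window.
# Heuristic: ~4 characters per token for English text (an assumption;
# actual token counts depend on the model's tokenizer).

CONTEXT_WINDOW = 200_000  # tokens, as advertised for Claude 3


def estimate_tokens(text: str) -> int:
    """Very rough token estimate derived from character count."""
    return len(text) // 4


def fits_in_window(text: str, window: int = CONTEXT_WINDOW) -> bool:
    """True if the estimated token count fits within the window."""
    return estimate_tokens(text) <= window


doc = "word " * 100_000  # ~500,000 characters of filler text
print(estimate_tokens(doc))  # 125000 estimated tokens
print(fits_in_window(doc))   # True: comfortably under the 200k window
```

By this back-of-the-envelope math, 200,000 tokens corresponds to several hundred pages of text, which is why long-context models are pitched at whole-document analysis.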
We wrote about Claude’s launch in March 2023 and Claude 2’s introduction in July 2023. Each time, Anthropic lagged significantly behind OpenAI’s top models in capability while beating them on context window length. With Claude 3, there is not yet consensus among experts on whether Anthropic has surpassed OpenAI’s released models in performance, and the presentation of AI benchmarks is notoriously prone to cherry-picking.
Claude 3 reportedly demonstrates advanced performance across a range of cognitive tasks, including reasoning, expert knowledge, mathematics, and language fluency. (Despite disagreement over whether large language models truly “know” or “reason,” the AI research community commonly uses those terms.) The company says its most advanced model, Opus, demonstrates “near-human levels of comprehension and fluency on complex tasks.”
That is a strong claim that deserves closer scrutiny. Opus may well be “near-human” on certain specific measures, but that does not mean Opus possesses human-level general intelligence (remember that pocket calculators are superhuman at arithmetic). It is a deliberately striking claim that can later be qualified down.
Anthropic claims that Claude 3 Opus beats GPT-4 on ten AI benchmarks, including MMLU (undergraduate-level knowledge), GSM8K (grade-school math), HumanEval (coding), and the amusingly named HellaSwag (common-sense inference). Some of the wins are narrow, such as Opus scoring 86.8 percent to GPT-4’s 86.4 percent on a five-shot trial of MMLU, and some are wide, such as 84.9 percent on HumanEval versus GPT-4’s 67.0 percent. What that means for you as a customer, though, is hard to say.
“LLM benchmarks should be viewed with a degree of caution as always,” argues AI researcher Simon Willison in an interview with Ars regarding Claude 3. “The model’s performance on benchmarks provides little insight into how the model ‘feels’ to use. However, no other model has outperformed GPT-4 on a variety of commonly used benchmarks, so this is still a big accomplishment.”
Claude 3 models reportedly outperform Claude 2 models in domains including analysis, forecasting, content creation, code generation, and multilingual conversation. The models also reportedly have improved vision capabilities that let them interpret visual formats such as diagrams, charts, and photos, a feature set comparable to Google’s Gemini and GPT-4V, which is available in paid tiers of ChatGPT.
Anthropic highlights how much faster and more affordable the three models are compared with rival models and earlier generations. The pricing breaks down as follows: Opus, the largest, costs $15 per million input tokens and $75 per million output tokens; Sonnet, the mid-range model, costs $3 per million input tokens and $15 per million output tokens; and Haiku, the smallest and fastest, costs $0.25 per million input tokens and $1.25 per million output tokens. By contrast, OpenAI’s GPT-4 Turbo costs $10 per million input tokens and $30 per million output tokens via API, while GPT-3.5 Turbo costs $0.50 per million input tokens and $1.50 per million output tokens.
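To make the per-million-token rates concrete, here is a minimal Python sketch using the launch prices quoted above (actual billing terms may differ) to compute the cost of a single API request:

```python
# API prices in USD per million tokens, as quoted at launch.
PRICES = {
    "claude-3-opus":   {"input": 15.00, "output": 75.00},
    "claude-3-sonnet": {"input": 3.00,  "output": 15.00},
    "claude-3-haiku":  {"input": 0.25,  "output": 1.25},
    "gpt-4-turbo":     {"input": 10.00, "output": 30.00},
    "gpt-3.5-turbo":   {"input": 0.50,  "output": 1.50},
}


def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000


# Example: a 10,000-token prompt that produces a 1,000-token reply.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
```

At those rates, the example request costs $0.2250 on Opus but only $0.0038 on Haiku, which is why Willison singles out the cheapest tier as “radically competitive.”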
Willison said he hadn’t yet gotten a feel for Claude 3’s performance when we asked, but he had noticed the per-model API pricing right away. “The unreleased cheapest one looks radically competitive,” Willison remarks. “The best quality one is super expensive.”
In other details, Anthropic reports that the Opus model surpassed 99 percent accuracy in a benchmark test, and the Claude 3 models can reportedly handle up to 1 million tokens for select customers (much like Gemini Pro 1.5). Additionally, the company says the Claude 3 models show improved accuracy, give fewer incorrect answers, and are less likely to refuse harmless prompts.
According to a model card released alongside the models, Anthropic achieved Claude 3’s capability gains in part by using synthetic data during the training phase. Synthetic data is data generated in-house by another AI language model; the technique can broaden a training dataset by including scenarios that a scraped dataset might lack. According to Willison, “the synthetic data thing is a big deal.”
In the upcoming months, Anthropic intends to issue regular updates for the Claude 3 model family, which will include additional features like tool usage, interactive coding, and “advanced agentic capabilities.” The company states that the Claude 3 models “present negligible potential for catastrophic risk at this time” and that it is still committed to making sure that safety precautions stay up with improvements in AI capability.
The Opus and Sonnet models are currently accessible through Anthropic’s API, with Haiku to follow shortly. Sonnet can also be accessed through Amazon Bedrock and in private preview on Google Cloud’s Vertex AI Model Garden.
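For developers, a request to Anthropic’s Messages API takes roughly the shape sketched below. This is an illustration only: the endpoint and field names match Anthropic’s public documentation from the Claude 3 launch, and the dated model identifier is an assumption that should be checked against the current API reference.

```python
import json

# Body of a POST to https://api.anthropic.com/v1/messages
# (authentication goes in the x-api-key and anthropic-version headers;
# the model identifier below is the launch-era name and may have changed).
payload = {
    "model": "claude-3-sonnet-20240229",
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Summarize this article in three bullet points.",
        },
    ],
}

# The serialized payload is what the API client sends over the wire.
print(json.dumps(payload, indent=2))
```

The response contains the model’s reply along with token-usage counts, which plug directly into the per-million-token pricing discussed above.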
To see for ourselves, we signed up for Claude Pro and tested Opus informally. In capability, Opus feels similar to GPT-4. It’s not very good at writing original dad jokes, all of which seem to have been lifted from the internet, but it does fairly well at summarizing information and creating text in different styles. It also performs fairly well at logically analyzing word problems, and while confabulation rates seem low, we did see a few slip in on more obscure topics.
None of that is a clear pass or fail, which can be frustrating in a world where computer products usually deliver hard numbers and definable benchmarks. Willison told us this was “yet another case of ‘vibes’ as a key concept in modern AI.”
AI benchmarks are tricky to use because the performance of any AI assistant varies widely depending on the prompts used and the conditioning of the underlying model. AI models can perform well “on the test” yet fail to transfer those abilities to novel scenarios.
Furthermore, Willison’s “vibes” stem from the fact that the efficacy of AI assistants is deeply individual. Because the task you assign a model can come from any intellectual domain, measuring how well it achieves your particular goal (say, with a benchmark metric) is difficult. Different models may perform well for different people, depending on the task and the prompting style.
This applies to all large language models, not just Claude 3, from suppliers like Google, OpenAI, and Meta. People have discovered over time that every model has unique characteristics and that, with the right prompting tactics, it is possible to either embrace or overcome the strengths and flaws of any model. It appears that the main AI assistants are currently settling into a set of remarkably similar features.
The upshot of all this is that when Anthropic claims Claude 3 can exceed GPT-4 Turbo, still widely regarded as the leader in general capability and low hallucination rates, one should proceed with caution, or a dose of vibes. If you are evaluating several models, it is crucial to test each one yourself, since nobody else can replicate the precise conditions under which you will use it, to make sure it suits your application.
Lex Machina is a prominent player in the realm of legal analytics, offering a suite of tools powered by artificial intelligence (AI) to empower legal professionals. Originally established as a company focused on intellectual property (IP) litigation research, it has evolved into a comprehensive solution, now residing under the umbrella of LexisNexis.
Here’s a deeper dive into Lex Machina and its functionalities:
Lex Machina leverages the power of AI, specifically natural language processing (NLP) and machine learning (ML), to extract valuable insights from vast troves of legal data. This data encompasses millions of court documents, filings, and case information, meticulously collected and organized by Lex Machina.
Lex Machina stands out as a powerful AI-driven toolset, empowering legal professionals with valuable insights and knowledge. By leveraging its comprehensive data analysis and user-friendly features, lawyers can gain a strategic edge in their practice, navigate the complexities of the legal landscape more effectively, and ultimately achieve better outcomes for their clients.
Westlaw Edge is a legal research platform developed by Thomson Reuters that leverages artificial intelligence (AI) to enhance the efficiency and effectiveness of legal research. It caters to lawyers, legal professionals, and organizations of all sizes, aiming to streamline research tasks while maintaining accuracy.
Exploring AI Tools:
While Westlaw Edge is a prominent example, it’s just one of many AI-powered legal research tools available. As you explore further, consider factors like your specific needs, budget, and desired features.
Remember, choosing the right AI tool depends on your specific needs and preferences. By conducting thorough research and exploring various options, you can find the perfect solution to streamline your legal research and stay ahead in the ever-evolving legal landscape.
If you’re looking for an AI-powered tool to streamline your legal work, LinkSquares might be worth exploring. It’s a contract lifecycle management (CLM) software specifically designed for legal teams, offering an all-in-one platform to manage contracts from creation to execution and beyond.
LinkSquares is primarily targeted towards in-house legal teams and corporate counsel who manage a high volume of contracts. It’s particularly beneficial for organizations with repositories exceeding 2,000 contracts.
Overall, LinkSquares is a powerful AI-powered CLM platform that can significantly improve the efficiency and accuracy of legal work for in-house legal teams. If you’re looking for a way to streamline your contract management processes and leverage the power of AI, LinkSquares might be worth considering.