Tag Archives: Big7
You can now generate AI images directly in the Google Search bar
Back in the olden days of last December, we had to go to specialized websites to have our natural language prompts transformed into generated AI art, but no longer! Google announced Thursday that users who have opted in to its Search Generative Experience (SGE) will be able to create AI images directly from the standard Search bar.
SGE is Google’s vision for our web searching future. Rather than picking websites from a returned list, the system will synthesize a (reasonably) coherent response to the user’s natural language prompt using the same data that the list’s links led to. Thursday’s updates are a natural expansion of that experience, simply returning generated images (using the company’s Imagen text-to-image model) instead of generated text. Users type in a description of what they’re looking for (a capybara cooking breakfast, in Google’s example) and, within moments, the engine will create four alternatives to pick from and refine further. Users will also be able to export their generated images to Drive or download them. — Read More
Opt In & Try It
Assistant with Bard: A step toward a more personal assistant
Assistant with Bard combines Assistant’s capabilities with generative AI to help you stay on top of what’s most important, right from your phone.
…Today at Made by Google, we introduced Assistant with Bard, a personal assistant powered by generative AI. It combines Bard’s generative and reasoning capabilities with Assistant’s personalized help. You can interact with it through text, voice or images — and it can even help take actions for you. In the coming months, you’ll be able to access it on Android and iOS mobile devices. — Read More
Microsoft Bing to gain more personalized answers, support for DALL-E 3 and watermarked AI images
Microsoft’s Bing is gaining a number of AI improvements, including support for OpenAI’s new DALL-E 3 model, more personalized answers in search and chat, and tools that will watermark images as being AI-generated. The company announced these and other Windows and Bing news at an event this week in New York, where it also introduced new Surface devices that include built-in AI experiences. — Read More
Google expects no change in its relationship with AI chip supplier Broadcom
Alibaba opens AI model Tongyi Qianwen to the public
Alibaba said on Wednesday it would open its artificial intelligence model Tongyi Qianwen to the public, in a sign it has gained Chinese regulatory approval to mass-market the model.
Authorities in China have recently accelerated efforts to support companies developing AI as the technology increasingly becomes a focus of competition with the United States. — Read More
Sundar Pichai on Google’s AI, Microsoft’s AI, OpenAI, and … Did We Mention AI?
Earlier this month, Sundar Pichai was struggling to write a letter to Alphabet’s 180,000 employees. The 51-year-old CEO wanted to laud Google on its 25th birthday, which could have been easy enough. Alphabet’s stock market value was around $1.7 trillion. Its vast cloud-computing operation had turned its first profit. Its self-driving cars were ferrying people around San Francisco. And then there was the usual stuff—Google Search still dominated the field, as it had for every minute of this century. The company sucks up almost 40 percent of all global digital advertising revenue.
But not all was well on Alphabet’s vast Mountain View campus. The US government was about to put Google on trial for abusing its monopoly in search. And the comity that once pervaded Google’s workforce was frayed. Some high-profile employees had left, complaining that the company moved too slowly. Perhaps most troubling, Google—a long-standing world leader in artificial intelligence—had been rudely upstaged by an upstart outsider, OpenAI. Google’s longtime rival Microsoft had beaten it to the punch with a large language model built into its also-ran search engine Bing, causing panic in Mountain View. Microsoft CEO Satya Nadella boasted, “I want people to know we made Google dance.” — Read More
LLMs and Tool Use
Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date.
The Microsoft project aims to teach AI how to use any and all digital tools in one fell swoop, a clever and efficient approach. Today, LLMs can do a pretty good job of recommending pizza toppings to you if you describe your dietary preferences and can draft dialog that you could use when you call the restaurant. But most AI tools can’t place the order, not even online. In contrast, Google’s seven-year-old Assistant tool can synthesize a voice on the telephone and fill out an online order form, but it can’t pick a restaurant or guess your order. By combining these capabilities, though, a tool-using AI could do it all. An LLM with access to your past conversations and tools like calorie calculators, a restaurant menu database, and your digital payment wallet could feasibly judge that you are trying to lose weight and want a low-calorie option, find the nearest restaurant with toppings you like, and place the delivery order. If it has access to your payment history, it could even guess at how generously you usually tip. If it has access to the sensors on your smartwatch or fitness tracker, it might be able to sense when your blood sugar is low and order the pie before you even realize you’re hungry.
Perhaps the most compelling potential applications of tool use are those that give AIs the ability to improve themselves. — Read More
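The mechanics behind tool use are simpler than the scenario above might suggest: the model never executes anything itself, it just emits a structured request ("call this tool with these arguments"), and a surrounding program runs the tool and feeds the result back for the model's next turn. Below is a minimal, hypothetical sketch of that loop in Python. The model is stubbed out, and the tool names (find_restaurant, place_order) are invented for illustration; none of this is taken from the Microsoft or Google systems mentioned in the article.

```python
import json
from typing import Callable, Dict, List


def find_restaurant(topping: str) -> dict:
    """Hypothetical tool: look up the nearest pizzeria offering a topping."""
    return {"name": "Luigi's", "topping": topping, "calories": 850}


def place_order(restaurant: str, item: str) -> dict:
    """Hypothetical tool: place a delivery order and return a confirmation."""
    return {"status": "ordered", "restaurant": restaurant, "item": item}


# Registry of tools the host program exposes to the model.
TOOLS: Dict[str, Callable[..., dict]] = {
    "find_restaurant": find_restaurant,
    "place_order": place_order,
}


def fake_llm(conversation: List[dict]) -> str:
    """Stand-in for a real chat-model API call. A real model would decide, from
    the conversation so far, whether to answer in plain text or to request a
    tool call encoded as JSON."""
    tool_turns = sum(1 for m in conversation if m["role"] == "tool")
    if tool_turns == 0:
        return json.dumps({"tool": "find_restaurant",
                           "args": {"topping": "mushroom"}})
    if tool_turns == 1:
        return json.dumps({"tool": "place_order",
                           "args": {"restaurant": "Luigi's",
                                    "item": "mushroom pizza"}})
    return "Done: a mushroom pizza from Luigi's is on its way."


def run(prompt: str, max_steps: int = 5) -> List[dict]:
    """Drive the loop: model proposes, host executes, result goes back."""
    conversation = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        reply = fake_llm(conversation)
        try:
            call = json.loads(reply)          # model asked for a tool
        except json.JSONDecodeError:
            conversation.append({"role": "assistant", "content": reply})
            break                             # plain-text answer, we're done
        result = TOOLS[call["tool"]](**call["args"])
        conversation.append({"role": "tool", "name": call["tool"],
                             "content": json.dumps(result)})
    return conversation


print(run("Order me a low-calorie mushroom pizza nearby"))
```

A production system would replace fake_llm with a real model call and validate the requested tool name and arguments against a schema before executing anything, but the shape of the loop is the same.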
Artist-created images and animations about artificial intelligence (AI) made freely available online
What does artificial intelligence (AI) look like? Searching online, the answer is likely streams of code, glowing blue brains or white robots with men in suits.
… Since launching, Visualising AI has commissioned 13 artists to create more than 100 artworks, which have gained over 100 million views and 800,000 downloads, and our imagery has been used by media outlets, research and civil society organisations. — Read More
View images on Unsplash
View videos on Pexels
Introducing SeamlessM4T, a Multimodal AI Model for Speech and Text Translations
The world we live in has never been more interconnected, giving people access to more multilingual content than ever before. This also makes the ability to communicate and understand information in any language increasingly important.
Today, we’re introducing SeamlessM4T, the first all-in-one multimodal and multilingual AI translation model that allows people to communicate effortlessly through speech and text across different languages. — Read More
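For readers who want to experiment, SeamlessM4T checkpoints have also been made available through the Hugging Face transformers library. The sketch below is a rough usage example based on that integration; the checkpoint name (facebook/hf-seamless-m4t-medium) and the generate() arguments follow the library's documentation but may change between releases, so treat it as an assumption to verify rather than a definitive recipe.

```python
# Rough sketch: translating text with SeamlessM4T via the Hugging Face
# `transformers` integration. Checkpoint and argument names follow the library's
# docs at the time of writing; verify against current documentation before use.
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Text in, translated text out (speech generation disabled).
text_inputs = processor(text="Hello, my dog is cute",
                        src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**text_inputs, tgt_lang="fra",
                               generate_speech=False)
translated = processor.decode(output_tokens[0].tolist()[0],
                              skip_special_tokens=True)
print(translated)

# Text in, translated speech out: generate() returns a waveform instead.
audio = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
```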