Last October, a research paper published by a Google data scientist, Databricks CTO Matei Zaharia and UC Berkeley professor Pieter Abbeel posited a way to allow GenAI models — i.e. models along the lines of OpenAI’s GPT-4 and ChatGPT — to ingest far more data than was previously possible. In the study, the co-authors demonstrated that, by removing a major memory bottleneck for AI models, they could enable models to process millions of words, as opposed to hundreds of thousands — the maximum of the most capable models at the time.
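The bottleneck in question is that standard attention scores every word against every other word, so memory grows quadratically with input length. As a minimal sketch of the general idea behind such fixes — streaming over key/value blocks with an online softmax so the full score matrix is never held in memory — and an illustration rather than the paper's exact algorithm:

```python
import numpy as np

def naive_attention(q, K, V):
    # Scores one query against all L keys. Done for every query at once,
    # this forms an (L, L) matrix -- the memory bottleneck at long context.
    scores = K @ q                        # (L,)
    w = np.exp(scores - scores.max())
    return (w / w.sum()) @ V              # (d_v,)

def blockwise_attention(q, K, V, block=1024):
    # Streams over key/value blocks, keeping only running statistics
    # (max, normalizer, weighted sum), so peak memory is O(block), not O(L).
    m, denom = -np.inf, 0.0
    acc = np.zeros(V.shape[1])
    for i in range(0, len(K), block):
        s = K[i:i + block] @ q            # scores for this block only
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)         # rescale earlier statistics
        w = np.exp(s - m_new)
        denom = denom * scale + w.sum()
        acc = acc * scale + w @ V[i:i + block]
        m = m_new
    return acc / denom

L, d = 8192, 64
q, K, V = (np.random.randn(*s) for s in [(d,), (L, d), (L, d)])
assert np.allclose(naive_attention(q, K, V), blockwise_attention(q, K, V))
```

Both functions return the same result; the blockwise version simply never holds more than one block of scores at a time, which is what lets context windows grow without memory growing quadratically.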
AI research moves fast, it seems.
Today, Google announced the release of Gemini 1.5 Pro, the newest member of its Gemini family of GenAI models. Designed to be a drop-in replacement for Gemini 1.0 Pro (which formerly went by “Gemini Pro 1.0” for reasons known only to Google’s labyrinthine marketing arm), Gemini 1.5 Pro is improved in a number of areas compared with its predecessor, perhaps most significantly in the amount of data that it can process. — Read More
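For developers, a drop-in replacement amounts to roughly a one-line change. A minimal sketch, assuming the google-generativeai Python SDK (the model identifiers below are placeholders and may differ by API version):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Swapping 1.0 Pro for 1.5 Pro is just a model-name change; the call
# surface stays the same. Identifiers are assumptions, not confirmed names.
model = genai.GenerativeModel("gemini-1.5-pro")  # was: "gemini-1.0-pro"

response = model.generate_content("Summarize this very long transcript: ...")
print(response.text)
```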
US researchers develop ‘unhackable’ computer chip that works on light
Researchers at the University of Pennsylvania have developed a new computer chip that uses light instead of electricity. The chip could improve the training of artificial intelligence (AI) models by speeding up data transfer and reducing the amount of electricity consumed.
… A team led by Nader Engheta, a professor at the School of Engineering and Applied Science at the University of Pennsylvania, has designed a silicon-photonic (SiPh) chip that can perform mathematical computations using light. The team turned to light because it is the fastest known means of transferring data, while building on widely abundant silicon ensures the technology can be scaled quickly. — Read More
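The excerpt doesn't spell out which computations the chip handles, but photonic accelerators of this kind typically target vector-matrix multiplication, the workhorse operation of neural networks. A purely illustrative NumPy sketch of that workload (all shapes and weights below are assumptions, not from the research):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# A toy two-layer forward pass. Each `@` below is a vector-matrix multiply,
# the step a SiPh chip would perform optically: the matrix is encoded in the
# chip and the multiplication happens as light propagates through it.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 128))  # hypothetical layer weights
W2 = rng.standard_normal((10, 64))
x = rng.standard_normal(128)         # hypothetical input activations

h = relu(W1 @ x)  # candidate for optical offload
y = W2 @ h        # candidate for optical offload
print(y.shape)    # (10,)
```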