Findings from the 2022 Artificial Intelligence and Business Strategy Global Executive Study and Research Project
New research shows that employees derive individual value from AI when using the technology improves their sense of competency, autonomy, and relatedness. Likewise, organizations are far more likely to obtain value from AI when their workers do. This report offers key insights for leaders on achieving both individual and organizational value with artificial intelligence. Read More
Monthly Archives: January 2023
AI legal assistant will help defendant fight a speeding case in court
In February, an AI from DoNotPay is set to tell a defendant exactly what to say and when during an entire court case. It is likely to be the first case ever defended by an artificial intelligence.
An artificial intelligence is set to advise a defendant in court for the first time ever. In February, the AI will run on a smartphone, listen to all speech in the courtroom, and instruct the defendant on what to say via an earpiece.
The location of the court and the name of the defendant are being kept under wraps by DoNotPay, the company that created the AI. But it is understood that the defendant is charged with speeding and that they will say only what DoNotPay’s tool tells them to via an earbud. The case is being considered as a test by the company, which has agreed to pay any fines, should they be imposed, says the firm’s founder, Joshua Browder. Read More
An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
Text-to-image models offer unprecedented freedom to guide creation through natural language. Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes. In other words, we ask: how can we use language-guided models to turn our cat into a painting, or imagine a new product based on our favorite toy? Here we present a simple approach that allows such creative freedom. Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new “words” in the embedding space of a frozen text-to-image model. These “words” can be composed into natural language sentences, guiding personalized creation in an intuitive way. Notably, we find evidence that a single word embedding is sufficient for capturing unique and varied concepts. We compare our approach to a wide range of baselines, and demonstrate that it can more faithfully portray the concepts across a range of applications and tasks. Read More
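The paper learns the new pseudo-word by backpropagating through a frozen diffusion model's denoising loss. The toy sketch below illustrates only the core mechanic, a single trainable row in an otherwise frozen text encoder's embedding table; as a simplification, it pulls the new token's embedding toward CLIP image features of a few example images instead of using the diffusion loss. The model id, image paths, prompt, and hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Toy sketch of textual inversion's core idea, NOT the paper's method: a single
# new token embedding is optimized while everything else stays frozen. Here the
# target is CLIP image features rather than a diffusion denoising loss.
import torch
from PIL import Image
from transformers import (CLIPImageProcessor, CLIPTextModelWithProjection,
                          CLIPTokenizer, CLIPVisionModelWithProjection)

name = "openai/clip-vit-base-patch32"
tokenizer = CLIPTokenizer.from_pretrained(name)
text_encoder = CLIPTextModelWithProjection.from_pretrained(name)
image_encoder = CLIPVisionModelWithProjection.from_pretrained(name)
image_processor = CLIPImageProcessor.from_pretrained(name)

# 1) Register the new "word" and give it a trainable row in the embedding table.
placeholder = "<my-concept>"
tokenizer.add_tokens([placeholder])
placeholder_id = tokenizer.convert_tokens_to_ids(placeholder)
text_encoder.resize_token_embeddings(len(tokenizer))
token_embeds = text_encoder.get_input_embeddings().weight

# 2) Freeze both encoders; only the new row will effectively be trained.
for p in list(text_encoder.parameters()) + list(image_encoder.parameters()):
    p.requires_grad_(False)
token_embeds.requires_grad_(True)
orig_embeds = token_embeds.detach().clone()

# 3) Target features from the 3-5 user-provided images of the concept.
images = [Image.open(p) for p in ["cat1.jpg", "cat2.jpg", "cat3.jpg"]]  # your files
pixels = image_processor(images=images, return_tensors="pt").pixel_values
with torch.no_grad():
    image_feats = image_encoder(pixels).image_embeds
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)

# 4) Optimize the pseudo-word so "a photo of <my-concept>" matches those images.
optimizer = torch.optim.AdamW([token_embeds], lr=5e-3)
inputs = tokenizer([f"a photo of {placeholder}"], padding=True, return_tensors="pt")
keep = torch.ones(len(tokenizer), dtype=torch.bool)
keep[placeholder_id] = False
for step in range(200):
    text_feats = text_encoder(**inputs).text_embeds
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    loss = (1 - text_feats @ image_feats.T).mean()  # cosine-distance loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():  # restore every embedding except the learned pseudo-word
        token_embeds.data[keep] = orig_embeds[keep]
```

Once learned, the placeholder token can be dropped into ordinary prompts ("a painting of <my-concept> on a beach"), which is what makes the approach composable with natural language.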
Deepfake Text Detector Tool GPTZero Spots AI Writing
A new tool is attempting to spot when text is written by ChatGPT and other generative AI engines. Edward Tian, a Princeton student and former open-source investigator for BBC Africa Eye, created GPTZero to identify deepfake text, a subject attracting growing interest in the academic and business worlds as the debate over how to respond to the potential misuse of AI continues.
Tian’s app scans submitted text for indicators of AI origins: how predictable the writing is and how much its sentences vary in structure, technically referred to as “perplexity” and “burstiness.” GPTZero was popular enough to almost immediately crash its hosting website, but it is still available to try online. … Voicebot ran multiple tests of GPTZero using six different generative AI tools, including ChatGPT, a few GPT-3-derived tools, and AI21. Tian’s creation caught the AI-generated text every time and correctly identified human-written text in more than a dozen cases. Tian doesn’t yet have enough data to measure accuracy, though he said he is working on publishing those figures. Not bad for an app thrown together on New Year’s Eve. Read More
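A rough sketch of the two signals described above, not Tian's implementation: perplexity scores how predictable a passage is to a language model, and burstiness is approximated here as the spread of per-sentence perplexities (human writing tends to mix easy and hard sentences more than model output does). GPT-2 is used purely as an illustrative scoring model.

```python
# Illustrative perplexity and burstiness scores, not GPTZero's actual code.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the returned loss is the mean token negative log-likelihood.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    # Proxy: standard deviation of per-sentence perplexities.
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    ppls = [perplexity(s) for s in sentences]
    mean = sum(ppls) / len(ppls)
    return (sum((p - mean) ** 2 for p in ppls) / len(ppls)) ** 0.5

sample = "The quick brown fox jumps over the lazy dog. Nobody expected the fog to roll in before noon."
print("perplexity:", round(perplexity(sample), 1), "burstiness:", round(burstiness(sample), 1))
```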
Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning
We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate these using two methods: the historical growth rate, and the compute-optimal dataset size for future predicted compute budgets. We investigate the growth in data usage by estimating the total stock of unlabeled data available on the internet over the coming decades. Our analysis indicates that the stock of high-quality language data will be exhausted soon, likely before 2026. By contrast, the stock of low-quality language data and image data will be exhausted only much later: between 2030 and 2050 for low-quality language data, and between 2030 and 2060 for images. Our work suggests that the current trend of ever-growing ML models that rely on enormous datasets might slow down unless data efficiency is drastically improved or new sources of data become available. Read More
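A back-of-the-envelope version of the extrapolation described in the abstract: grow the size of training datasets at a constant annual rate and report the first year the projected dataset exceeds the estimated stock of available data. The starting size, growth rate, and stock below are placeholder assumptions, not the paper's estimates, so the printed year is not meant to reproduce its 2026 figure.

```python
# Minimal sketch of the "usage outgrows the stock" extrapolation, with made-up numbers.
def exhaustion_year(start_year: int, dataset_tokens: float,
                    stock_tokens: float, annual_growth: float) -> int:
    """First year in which the projected dataset size exceeds the available stock."""
    year = start_year
    while dataset_tokens < stock_tokens:
        dataset_tokens *= annual_growth
        year += 1
    return year

# Hypothetical: a 1e12-token dataset in 2022, growing 50% per year, against a
# 2e13-token stock of high-quality text.
print(exhaustion_year(2022, dataset_tokens=1e12, stock_tokens=2e13, annual_growth=1.5))
```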
#machine-learning
The era of cloud colonialism has begun
Having claimed North America and Europe, the cloud giants hope to add Latin America and Africa to their empires
OPINION When the major cloud providers warned of slowing customer demand earlier this quarter, many expected them to pull back on capital expenditures until the latest macroeconomic headwinds had blown over. Only, they didn’t.
Week after week, the major cloud providers have pushed ahead. They’ve announced new capacity, availability zones, and regions across Central and South America and sub-Saharan Africa – all markets that have undergone an explosion of demand for cloud services over the past two years.
Amazon Web Services (AWS), Microsoft Azure, and Google Cloud overwhelmingly dominate the US and European markets – and if they have their way, they’ll control an even larger stake in these emerging markets too. Read More
Remaking Old Computer Graphics With AI Image Generation
Can AI image generation tools produce re-imagined, higher-resolution versions of old video game graphics?
Over the last few days, I used AI image generation to reproduce one of my childhood nightmares. I wrestled with Stable Diffusion, DALL-E, and Midjourney to see how these commercial AI generation tools can help retell an old visual story – the intro cinematic to an old video game (Nemesis 2 on the MSX). This post describes the process and my experience using these models and services to retell a story in higher-fidelity graphics. Read More
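As a rough illustration of the kind of step such a workflow involves (not the author's exact process), the sketch below runs a low-resolution game frame through the diffusers library's Stable Diffusion image-to-image pipeline under a text prompt. The model id, file names, and parameters are illustrative assumptions.

```python
# Hypothetical img2img pass over an old game frame; paths and settings are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Upscale the low-res screenshot first so the pipeline has enough pixels to work with.
init = Image.open("nemesis2_intro_frame.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="dramatic sci-fi space battle, cinematic lighting, detailed digital painting",
    image=init,
    strength=0.6,        # how far to depart from the original frame
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]
result.save("nemesis2_reimagined.png")
```

The strength parameter is the main creative dial here: low values mostly sharpen the original composition, while high values let the model re-imagine the scene.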
2022 Was the Year of the Metaverse—Until It Wasn’t
The commercial potential of the metaverse was so potent that it compelled Mark Zuckerberg to rename Facebook to Meta heading into 2022. But this year, rather than rapidly redefine the internet, the metaverse stalled.
The word of the year, per the annual (and now semi-democratically awarded) designation from Oxford, is … “goblin mode.” Seriously?
What happened to “metaverse,” the distant runner-up to “goblin mode” with less than one-tenth of the votes? As recently as August, I could’ve sworn we’d never hear the end of the metaverse, the buzzword encapsulating the potential for a deeply embodied internet with unprecedented connectivity and interoperability; essentially, virtual reality. We’ve come a long way since Snow Crash, and now the metaverse is, supposedly, the very near-term future of the internet. The apparent commercial potential of the metaverse was so potent that it compelled Mark Zuckerberg to rename Facebook (parent company), if not also Facebook (website), to Meta, thus reimagining his social-media business as “a metaverse company” heading into 2022. But this year, rather than rapidly redefining the internet, the metaverse stalled, and user counts on the formative platforms have struggled to break into the tens of thousands, much less millions. Read More
Factoring integers with sublinear resources on a superconducting quantum processor
Shor’s algorithm has seriously challenged information security based on public-key cryptosystems. However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities. Here, we report a universal quantum algorithm for integer factorization that combines classical lattice reduction with a quantum approximate optimization algorithm (QAOA). The number of qubits required is O(log N / log log N), which is sublinear in the bit length of the integer N, making it the most qubit-saving factorization algorithm to date. We demonstrate the algorithm experimentally by factoring integers up to 48 bits with 10 superconducting qubits, the largest integer factored on a quantum device. We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factoring large integers of realistic cryptographic significance. Read More
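A quick arithmetic check on the abstract's headline numbers: for an n-bit modulus, O(log N / log log N) is proportional to n / log2(n), which is about 186 for n = 2048; the quoted 372 physical qubits is roughly twice that. The constant factor of ~2 below is inferred from matching those two figures, not a value stated in the paper.

```python
# Illustrative sanity check of the sublinear qubit-count claim; the constant is an assumption.
import math

def sublinear_qubits(n_bits: int, constant: float = 2.0) -> float:
    """Estimate constant * n / log2(n) qubits for an n-bit modulus."""
    return constant * n_bits / math.log2(n_bits)

print(round(sublinear_qubits(2048)))  # ~372, consistent with the RSA-2048 estimate quoted above
```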
#cyber, #quantum
Hackers could get help from the new AI chatbot
The AI-enabled chatbot that’s been wowing the tech community can also be manipulated to help cybercriminals perfect their attack strategies.
Why it matters: The arrival of OpenAI’s ChatGPT tool last month could allow scammers behind email and text-based phishing attacks, as well as malware groups, to speed up the development of their schemes.
- Several cybersecurity researchers have been able to get the AI-enabled text generator to write phishing emails or even malicious code for them in recent weeks.
#cyber, #nlp