ChatGPT Answers Beat Physicians’ on Info, Patient Empathy, Study Finds

— Evaluators gave the chatbot the better rating for responses to patient queries by a nearly 4:1 ratio

The artificial intelligence (AI) chatbot ChatGPT outperformed physicians when answering patient questions, based on quality of response and empathy, according to a cross-sectional study.

Of 195 exchanges, evaluators preferred ChatGPT responses to physician responses in 78.6% (95% CI 75.0-81.8) of the 585 evaluations, reported John Ayers, PhD, MA, of the Qualcomm Institute at the University of California San Diego in La Jolla, and co-authors.

The AI chatbot responses were also given a significantly higher quality rating than physician responses (t=13.3, P<0.001). The proportion of responses rated good or very good quality (≥4) was higher for ChatGPT (78.5%) than for physicians (22.1%), a 3.6 times higher prevalence of good or very good quality responses for the chatbot, they noted in JAMA Internal Medicine. Read More


#chatbots

How to Make ChatGPT Copy Your Writing Style

Key Takeaway: Published writers can ask ChatGPT to emulate their style by referencing their existing work; anyone else can submit samples of their own writing for emulation, or simply describe the desired style in plain language.

ChatGPT can generate excellent text on virtually any subject, but by default it has a very bland (and obvious) tone. Rather than editing its output into your own style after the fact, you can simply teach ChatGPT your style up front. Read More
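The article walks through doing this in the ChatGPT interface, but the same trick carries over to the API. Below is a minimal sketch, assuming the pre-1.0 openai Python client; the writing samples and prompts are placeholders you would swap for your own:

import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# Placeholder samples -- paste in a few paragraphs of your own prose instead.
STYLE_SAMPLES = """
Sample 1: <an excerpt of your writing>
Sample 2: <another excerpt in the same voice>
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # any chat model works here
    messages=[
        {"role": "system",
         "content": "Study the writing samples below and imitate their tone, "
                    "rhythm, and vocabulary in everything you write.\n" + STYLE_SAMPLES},
        {"role": "user",
         "content": "Write a 150-word product announcement in this style."},
    ],
)
print(response["choices"][0]["message"]["content"])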

#chatbots

What is Visual Prompting?

Landing AI’s Visual Prompting capability is an innovative approach that takes text prompting, used in applications such as ChatGPT, to computer vision. The impressive part? With only a few clicks, you can transform an unlabeled dataset into a deployed model in mere minutes. This results in a significantly simplified, faster, and more user-friendly workflow for applying computer vision.

Traditionally, building a natural language processing (NLP) model was a time-consuming process that required a great deal of data labeling and training before any predictions could be made. However, things have changed radically. Thanks to large pre-trained transformer models like GPT-4, a single API call is all you need to begin using a model. This low-effort setup has removed all the hassle and allowed users to prompt an AI and start getting results in seconds.
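To make that concrete, here is roughly what "a single API call" looks like in practice. This is a sketch assuming the pre-1.0 openai Python client, with an illustrative model name and prompt:

import openai  # reads OPENAI_API_KEY from the environment

# One call, no data labeling and no training: the pre-trained model does the work.
reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Classify the sentiment of this review: 'The demo was flawless.'"}],
)
print(reply["choices"][0]["message"]["content"])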

As has happened in NLP, large pre-trained vision transformers have made it possible to implement Visual Prompting. This approach accelerates the building process, since only a few simple visual prompts are required. You can have a working computer vision system deployed and making inferences in seconds or minutes, which will benefit both individual projects and enterprise solutions. Read More

#chatbots

Visual Prompting Livestream With Andrew Ng

Read More

#chatbots, #videos

The Anatomy of Autonomy: Why Agents are the next AI Killer App after ChatGPT

“GPTs are General Purpose Technologies”1, but every GPT needs a killer app. Personal Computing needed VisiCalc, the smartphone brought us Uber, Instagram, Pokemon Go and iMessage/WhatsApp, and mRNA research enabled rapid production of the Covid vaccine.

One of the strongest indicators that the post-GPT-3 AI wave is more than “just hype” is that the killer apps are already evident, each a >$100m opportunity:

  • Generative Text for writing – Jasper AI going from 0 to $75m ARR in 2 years
  • Generative Art for non-artists – Midjourney/Stable Diffusion Multiverses
  • Copilot for knowledge workers – both GitHub’s Copilot X and “Copilot for X”
  • Conversational AI UX – ChatGPT / Bing Chat, with a long tail of Doc QA startups
I write all this as necessary context to imply:

The fifth killer app is here, and it is Autonomous Agents. Read More
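For readers who have not yet seen one: an autonomous agent is essentially an LLM wrapped in a loop that picks an action, executes it, and feeds the observation back in until the goal is met. Here is a minimal sketch of that plan-act-observe loop; every name in it is illustrative, and the LLM is stubbed out so the example runs on its own:

# Stand-in tools; a real agent might have web search, file I/O, code execution, etc.
def search(query: str) -> str:
    return f"(pretend search results for {query!r})"

def write_file(text: str) -> str:
    return f"(pretend saved {len(text)} characters)"

TOOLS = {"search": search, "write_file": write_file}

def decide_next_action(goal: str, history: list) -> tuple:
    # In a real agent this is an LLM call that sees the goal plus all prior
    # observations and returns the next (tool, argument) pair, or "finish".
    if not history:
        return ("search", goal)
    return ("finish", history[-1])

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        tool, arg = decide_next_action(goal, history)
        if tool == "finish":
            return arg
        history.append(TOOLS[tool](arg))  # act, then feed the result back in
    return "step budget exhausted"

print(run_agent("summarize recent WebGPU news"))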

#chatbots

Snapchat sees spike in 1-star reviews as users pan the ‘My AI’ feature, calling for its removal

The user reviews for Snapchat’s “My AI” feature are in — and they’re not good. Launched last week to global users after initially being a subscriber-only addition, Snapchat’s new AI chatbot powered by OpenAI’s GPT technology is now pinned to the top of the app’s Chat tab where users can ask it questions and get instant responses. But following the chatbot’s rollout to Snapchat’s wider community, Snapchat’s app has seen a spike in negative reviews amid a growing number of complaints shared on social media.

Over the past week, Snapchat’s average U.S. App Store review was 1.67, with 75% of reviews being one-star, according to data from app intelligence firm Sensor Tower. For comparison, across Q1 2023, the Snapchat average U.S. App Store review was 3.05, with only 35% of reviews being one-star. Read More

#chatbots

Enhancing Vision-Language Understanding with Advanced Large Language Models

The recent GPT-4 has demonstrated extraordinary multi-modal abilities, such as directly generating websites from handwritten text and identifying humorous elements within images. These features are rarely observed in previous vision language models. We believe the primary reason for GPT-4’s advanced multi-modal generation capabilities lies in the utilization of a more advanced large language model (LLM). To examine this phenomenon, we present MiniGPT-4, which aligns a frozen visual encoder with a frozen LLM, Vicuna, using just one projection layer. Our findings reveal that MiniGPT-4 possesses many capabilities similar to those exhibited by GPT-4 like detailed image description generation and website creation from hand-written drafts. Furthermore, we also observe other emerging capabilities in MiniGPT-4, including writing stories and poems inspired by given images, providing solutions to problems shown in images, teaching users how to cook based on food photos, etc. In our experiment, we found that only performing the pretraining on raw image-text pairs could produce unnatural language outputs that lack coherency including repetition and fragmented sentences. To address this problem, we curate a high-quality, well-aligned dataset in the second stage to finetune our model using a conversational template. This step proved crucial for augmenting the model’s generation reliability and overall usability. Notably, our model is highly computationally efficient, as we only train a projection layer utilizing approximately 5 million aligned image-text pairs. Our code, pre-trained model, and collected dataset are available at https://minigpt-4.github.io/. Read More
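The core recipe, training nothing but the bridge between two frozen models, is compact enough to sketch. Here is a toy PyTorch illustration; the stand-in encoder, embedding table, and dimensions are placeholders, not the actual ViT and Vicuna components used in the paper:

import torch
import torch.nn as nn

VIS_DIM, LLM_DIM = 768, 4096  # illustrative feature sizes

visual_encoder = nn.Linear(224 * 224 * 3, VIS_DIM)  # stand-in for the real vision encoder
llm_embeddings = nn.Embedding(32000, LLM_DIM)       # stand-in for Vicuna's embedding table

for module in (visual_encoder, llm_embeddings):     # freeze everything...
    for p in module.parameters():
        p.requires_grad = False

projection = nn.Linear(VIS_DIM, LLM_DIM)            # ...except this single layer

image = torch.rand(1, 224 * 224 * 3)                # a flattened dummy image
image_token = projection(visual_encoder(image))     # image features mapped into LLM space
text_tokens = llm_embeddings(torch.tensor([[1, 2, 3]]))

# The projected image features are prepended to the text embeddings and the
# frozen LLM consumes the combined sequence; only `projection` receives gradients.
llm_input = torch.cat([image_token.unsqueeze(1), text_tokens], dim=1)
print(llm_input.shape)  # torch.Size([1, 4, 4096])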

Paper

Demo links: Link1, Link2, Link3, Link4, Link5, Link6

#chatbots, #image-recognition

Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and it’s very impressive

Web LLM is a project from the same team as Web Stable Diffusion; it runs the vicuna-7b-delta-v0 model in the browser, taking advantage of the brand-new WebGPU API that just arrived in Chrome in beta.

I got their browser demo running on my M2 MacBook Pro using Chrome Canary, launching it with their suggested options:

# starts Chrome Canary with Dawn's robustness checks disabled (Dawn is Chrome's WebGPU backend)
/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ Canary --enable-dawn-features=disable_robustness

Read More

#chatbots

I am done, I can’t keep up with AI advancement

AI is stepping up every day, and it’s getting insane.
This time the curveball is Auto-GPT: a smarter, sassier take on ChatGPT that runs autonomously, chaining GPT-4 calls to pursue goals you set for it.

And while I am curious to know whether it will replace many jobs, I still feel it will facilitate many of them, if we keep up with it. But it’s getting scary fast. Read More

Video

#chatbots

The AI revolution: Google’s developers on the future of artificial intelligence | 60 Minutes

Read More

#chatbots, #videos