
💼 Education Will Be Disrupted and It Desperately Needs It

Yo, This is Briefed. We're like the friend that just listens when you're talking about your ex for the 100th time. No judgment here.

The Singularity hasn’t happened yet, so in the meantime, here are some cool stories:

🏫ChatGPT is going to change education, not destroy it

👨‍🏫Sal Khan explains why GPT-4 is ready to be a tutor

👨‍⚕️New AI Gives Doctors Advice on Patients’ Ailments Like a Human Colleague

🏫ChatGPT is going to change education, not destroy it

When ChatGPT was first launched, it faced backlash as an AI-powered cheat machine, leading to widespread bans across schools and universities. But like any good plot twist, it turns out ChatGPT might be the hero of the story rather than the villain. As teachers and educators start to reconsider their stance, they're finding that ChatGPT could actually improve education rather than destroy it. From making lessons more interactive and engaging to promoting media literacy, ChatGPT is pushing the boundaries of traditional education. It's like the cool new kid in school everyone was initially afraid of, but now wants to hang out with.

Even educational tech companies are getting in on the ChatGPT action, with giants like Duolingo and Quizlet integrating it into their apps. Instead of stubbornly trying to ban it, educators are realizing that it's time to adapt and make the most of this AI technology in the classroom. Think about it: if ChatGPT makes it easy to cheat on an assignment, maybe it's the assignment that's the problem, not the chatbot. As educators continue to explore the potential applications of AI in education, it becomes clear that the future is bright – like ChatGPT doing your homework kind of bright.

So buckle up, because this is one AI rollercoaster ride that's just getting started.

👨‍🏫Sal Khan explains why GPT-4 is ready to be a tutor

Well, ChatGPT may have its moments of bungling basic math, but Khan Academy founder Sal Khan says the latest version of this AI smarty-pants is quite the tutor! "This tech's got some serious brainpower," Khan quipped to Axios. "It's leveling up!"

In the latest AI gossip: Khan Academy has been showing off with GPT-4 since OpenAI unleashed this bad boy. Now, two more school districts (Newark, NJ and Hobart, IN) are hopping on the Khanmigo AI tutor bandwagon. With these newbies, a whopping 425 teachers and students are putting Khanmigo to the test.

Khanmigo works just like a real-life tutor, minus the awkward silences. It takes a peek at students' work, and when they're stuck, it swoops in to save the day. It can even spot where students trip up in their reasoning, not just whether their answers are right or wrong.

Now, ChatGPT has stirred up some drama, especially in education. Some schools are giving it the boot due to its tendency to "hallucinate" and concerns about students exploiting it for their papers. But let's be honest, many critics are still using the tech on the down-low.

Khan says today's AI has the potential to give kids, rich and poor alike, a personalized education boost. "The perfect time for tutoring? When you're in the thick of it," he remarked.

But wait, there's more! Khanmigo didn't just adopt GPT-4 as is—they added their own magical touch to steer clear of math blunders. "I'd bet my calculator no one's worked harder on this than us," Khan boasted.

One crowd-pleaser lets students of all ages go head-to-head with the AI tutor in a debate. It can even lend a hand to teachers, identifying which students are acing it and which ones need a nudge.

However, like the students it tutors, Khanmigo's still got some homework to do. That's why it's sticking with a smaller crowd under Khan Labs' watchful eye. "Opting in means you know it's not perfect," he said. "But hey, it's getting good real quick!"

With earlier models like GPT-3.5, the engine could solve math problems but couldn't break them down step-by-step like GPT-4 (or any decent tutor). Khan's got big dreams for the future: covering current events and tackling problems with diagrams and graphs. But for now, those goals are still on the AI wishlist.


👨‍⚕️New AI Gives Doctors Advice on Patients’ Ailments Like a Human Colleague

Doctors dabbling in AI for diagnosis? Old news. But getting them to trust these digital sidekicks? Now that's a whole other ballgame.

Cornell University researchers took a swing at this by crafting a transparent AI system that banters with doctors like a human coworker, swapping opinions on medical literature. Presenting their findings at the ACM Conference on Human Factors in Computing Systems (CHI), they discovered that doctors care less about how the AI works, and more about the sources backing its suggestions.

"As doctors, we don't have time to learn AI lingo," Qian Yang, the study's lead and a Cornell information science professor, said in a press release. "Just give us the clinical trial results and journal articles, and we'll decide if the AI's on point or off the rails."

The researchers interviewed and surveyed 12 doctors and clinical librarians, finding that when these pros butt heads, they consult biomedical research and duke it out. The AI system they developed aims to mimic this process.

Yang explained, "Our system emulates the chit-chat we saw among doctors and fetches the same kind of evidence from clinical literature to back the AI's suggestions."

Based on the older, yet still snazzy GPT-3, their AI tool sports a simple interface: AI suggestions on one side, biomedical literature and study summaries on the other. So far, they've tailored it to neurology, psychiatry, and palliative care.

The doctors who tested the tool preferred the presentation of medical literature to a lecture on AI workings. But let's not get ahead of ourselves—the study had only 12 participants, hardly enough to make sweeping conclusions.

This AI doc-in-training seems to outshine ChatGPT's recent flop, where it bungled 60% of its answers in real medical scenarios. But who knows how the Cornell AI would fare under similar scrutiny?

For now, let's remember that AI tools may be handy for experienced docs, but we're light-years away from an "AI medical advisor" stealing their jobs. But hey, maybe AGI will learn to travel faster than light?