Princeton prof. I use Twitter to share my research & commentary on surveillance capitalism, infosec, cryptocurrencies, AI ethics, tech policy, & academic life.
The whole talk by OpenAI's @E0M is fascinating and eye-opening. He says the pace of product and feature releases over the last year would have been even faster if not for the GPU shortage — they had stuff ready to ship but knew they couldn't handle the load.
OpenAI's security team noticed that a group reverse engineered and was abusing ChatGPT's internal API. Instead of shutting them down, they quickly replaced ChatGPT with CatGPT… and then lurked in the attackers' Discord to watch the chaos. Absolute legend.
Amazing thread. Reports of real-world utility, even anecdotal, are more informative to me than benchmarks. But there's a flip side. How many people put their symptoms into ChatGPT and got wrong answers, which they trusted over doctors? There won't be viral threads about those.
Nice sleuthing by @GaryMarcus about a paper that made waves by claiming that Tesla drivers are good at paying attention when on Autopilot, and was then scrubbed from the Internet. Silently deleting bogus claims is not good research ethics.
Voices in the Code is a gripping story of how the U.S. kidney transplant matching algorithm was developed. The lessons are widely applicable. A must-read for anyone who cares about algorithms and justice. Buy the book! And watch our book talk Monday at 3pm ET
It's not just journalists who don't understand ML accuracy measurement. In a brilliant paper, @aawinecoff & @watkins_welcome explain how VCs have extremely superficial ideas about accuracy and how startup founders respond to these misinformed expectations:
This brilliant, far-too-polite article should be the go-to reference for why "follow the science" is utterly vacuous. The science of aerosol transmission was there all along. It could have stopped COVID. But CDC/WHO didn't follow the science. Nor did scientists for the most part.