upcarta

Jim Fan

2 Followers
community-curated profile

@NVIDIA AI Scientist. @Stanford PhD. Building embodied general intelligence. Sharing hot ideas & deep dives🧵! NeurIPS Best Paper: MineDojo. Ex-Google, OpenAI

Jim Fan @DrJimFan · Nov 1, 2023
  • From Twitter

This is by far my favorite post on the AI executive order. Andrew says it best: The right place to regulate AI is at the APPLICATION layer. Requiring AI apps like healthcare, self-driving, etc. to meet stringent requirements, even pass audits, can ensure safety. But adding burdens to foundation model development unnecessarily slows down AI’s progress.

Tweet Oct 31, 2023
Laws to ensure AI applications are safe, fair, and transparent are needed. But the White House's use of the Defense Production Act—typically reserved for war or national emergencies—distorts AI through the lens of security, for example with phrases l
by Andrew Ng
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · Oct 20, 2023
  • From Twitter

Thank you Sharon for featuring Eureka in the wonderful article on VentureBeat!

Article Oct 20, 2023
New Nvidia AI agent, powered by GPT-4, can train robots
by Sharon Goldman
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · Oct 20, 2023
  • From Twitter

Great minds think alike! Awesome work on your end too!

Paper Sep 21, 2023
Text2Reward: Automated Dense Reward Function Generation for Reinforcement Learning
by Tao Yu and 3 others
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · Jun 28, 2023
  • From Twitter

Everyone should read the celebrated mathematician Terence Tao's blog on LLMs. He predicts that AI will be a trustworthy co-author in mathematical research by 2026, when combined with search and symbolic math tools. I believe math will be the first scientific discipline to see major breakthroughs enabled by AI, because math:
▸ can be expressed conveniently as a coding problem. Strings are naturally first-class citizens.
▸ can be rigorously verified by theorem provers like Lean, rather than relying on empirical results.
▸ does not require physical experiments like biology & medicine. Robotics isn't ready yet.
We are already seeing big progress:
▸ LeanDojo from my colleagues @NVIDIAAI & @Caltech is among the first steps towards this grand challenge.
▸ Last year, OpenAI used Lean to solve some math olympiad problems.
▸ ChemCrow is another example, but for chemistry. It integrates GPT-4 with professional tools like a molecular synthesis planner and reaction prediction.
▸ Terence Tao's blog:

Article Jun 12, 2023
Embracing change and resetting expectations | Microsoft Unlocked
by Terence Tao
Recommended by 1 person
1 mention
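As a toy illustration of the machine-checked verification the post describes, here is a minimal Lean 4 snippet (a trivial example, nowhere near research-scale mathematics, and the theorem name is arbitrary):

```lean
-- Lean verifies this proof mechanically; no empirical validation is needed.
-- Commutativity of natural-number addition, discharged by a library lemma.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```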
Jim Fan @DrJimFan · Jun 13, 2023
  • From Twitter

A well-written thread by Shawn on the trend of GPT becoming a proficient tool *maker*, in addition to the usual tool user mode. We used to design a shell layer on top of LLMs, but now LLMs can act as a shell "controller" on top of codebases they implement on the fly. Nice…

Tweet Jun 7, 2023
this is a trend I'm calling "Code is all you need" Comparing Bard vs @OpenAI ChatGPT vs @AnthropicAI Claude on Google's own reasoning/math prompts shows the stark contrast once you make your model write and eval code to answer questions. Reminds
by swyx.ai
Recommended by 1 person
2 mentions
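The "write and eval code" pattern from the quoted thread can be sketched in a few lines of Python. `call_model` below is a hypothetical stub standing in for a real LLM API call; the point is the generate-then-execute loop, not the model itself:

```python
# Sketch of the pattern: instead of asking the model for a final number,
# ask it to emit Python, execute that code, and report the result.

def call_model(prompt: str) -> str:
    # Stub: a real system would send `prompt` to an LLM and return its code.
    return "result = sum(i * i for i in range(1, 11))"

def answer_with_code(question: str):
    code = call_model(
        f"Write Python that computes: {question}. Store the answer in `result`."
    )
    namespace: dict = {}
    exec(code, namespace)  # run the model-written code in a fresh namespace
    return namespace["result"]

print(answer_with_code("the sum of squares of 1..10"))  # → 385
```

A production version would sandbox the `exec` call, since the executed code comes from an untrusted model.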
Jim Fan @DrJimFan · Jun 13, 2023
  • From Twitter

Very well-written thread, thanks Shawn!

Tweet Jun 7, 2023
this is a trend I'm calling "Code is all you need" Comparing Bard vs @OpenAI ChatGPT vs @AnthropicAI Claude on Google's own reasoning/math prompts shows the stark contrast once you make your model write and eval code to answer questions. Reminds
by swyx.ai
Recommended by 1 person
2 mentions
Jim Fan @DrJimFan · Jun 1, 2023
  • From Twitter

Really well-written piece on the evolution of prompt engineering! We are moving from magic incantations that make no sense to a regular human, to intricate no-gradient stacks around an LLM that avoid under-specifying a task.

Tweet Jul 1, 2023
People always ask if prompt engineering is going to go away over time. My short answer is "no". But, a more nuanced answer is that the goal of prompt engineering has evolved over time: from nudging a finicky language model to do an "easy" task (2020
by Jason Wei
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · May 27, 2023
  • From Twitter

His introductory thread:

Tweet Feb 6, 2023
Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents Can ChatGPT help with open-world game playing, like Minecraft? It sure can! 🧵👇 arxiv.org/abs/2302.01560 github.com/CraftJarvis
by Xiaojian Ma
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · May 12, 2023
  • From Twitter

Read the excellent cookbook for self-supervised learning, from the godfather of deep learning himself @ylecun. The recipes in this freely available paper will truly help your business. Not the "top 10 ChatGPT tricks you shouldn't miss". 4/

Paper Apr 24, 2023
A Cookbook of Self-Supervised Learning
by Randall Balestriero and Yann LeCun
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · May 12, 2023
  • From Twitter

Don't watch AI FOMO and fear-mongering videos on YouTube. Watch the excellent talk from John Schulman @johnschulman2, creator of RLHF that powers GPT-4. Now *this* qualifies as "insane" ingenuity, if you ask me.

Video Apr 20, 2023
John Schulman - Reinforcement Learning from Human Feedback: Progress and Challenges
by John Schulman
Recommended by 2 people
2 mentions
Jim Fan @DrJimFan · May 3, 2023
  • From Twitter

Excellent blog post on the speed and convergence of GPT training runs on H100 (FP8)

Article Apr 27, 2023
Benchmarking Large Language Models on NVIDIA H100 GPUs with CoreWeave (Part 1)
by Vitaliy Chiley and Daya Khudia
Recommended by 1 person
1 mention
Jim Fan @DrJimFan · May 1, 2023
  • From Twitter

LLaMA spearheaded the age of community-driven midsize LMs. It’s game-changing for many use cases that don’t require GPT-4’s full firepower. Great blog post that covers Alpaca, Vicuna, Koala, WizardLM, OpenAssistant, and more!

Article Apr 27, 2023
A brief history of LLaMA models - AGI Sphere
Recommended by 1 person
1 mention
  • upcarta ©2025