Monthly AI Newsletter from PortNLP Lab
Issue #3 | January 2025

Welcome back to explain!, the PortNLP Lab's monthly newsletter. As we step into the new year, we continue our commitment to advancing responsible artificial intelligence. This month, we highlight significant developments in the AI landscape and share exciting updates from our lab.

 

In the Know

📰 U.S. Policy Shifts on AI Regulation

In a notable policy reversal, President Donald Trump has rescinded the Biden administration's executive order on artificial intelligence risks. The original order required AI developers to share safety test results with the government before public release, aiming to mitigate potential risks to national security and public safety. The revocation has reignited debate over how to balance innovation with responsible AI development.

Read more

 

📈 Open-Source AI Advancement

AI lab DeepSeek has released its new R1 model family under an open MIT license, with its largest version containing 671 billion parameters. The company claims the model performs at levels comparable to OpenAI's o1 simulated reasoning model on several math and coding benchmarks.

Read more

📢 Sustainability Spotlight: AI's Carbon Footprint vs. Human Effort

AI training is highly energy-intensive: training a model like GPT-3 emitted over 500 metric tons of CO₂, comparable to the annual emissions of more than 100 gasoline-powered cars. (MIT Technology Review)
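For a rough sense of scale, here is a back-of-the-envelope check. It assumes the EPA's commonly cited figure of about 4.6 metric tons of CO₂ per typical passenger vehicle per year; that number is our assumption, not one stated in the article:

\[
\frac{500\ \text{t CO}_2}{4.6\ \text{t CO}_2/\text{car-year}} \approx 109\ \text{car-years}
\]

which indeed works out to "more than 100 cars over a year."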

However, once trained, AI can be significantly more energy-efficient than humans at tasks like writing and illustrating: AI-generated text emits 130 to 1,500 times less CO₂ per page than human writing, and AI-generated images emit 310 to 2,900 times less CO₂ per image than human illustration. (Nature)

While AI offers efficiency gains, balancing its energy demands during training with sustainability efforts remains a challenge. Advancements in energy-efficient AI models and reliance on renewable energy sources are key to minimizing its environmental impact.

Stay tuned for more insights on responsible AI practices!

 

Spotlight on Us

We're kicking off 2025 with some exciting research and community engagement! Here’s what we’ve been up to this January:

 

🔬 Fresh Off the Press

📌 Cracking the code of Nepali idioms!
Can AI really understand the nuances of idioms in low-resource languages? We built neDIOM, a dataset that explores how AI interprets Nepali idioms—sometimes successfully, sometimes hilariously. “neDIOM: Dataset and Analysis of Nepali Idioms” by Rhitabrat Pokharel and Ameeta Agrawal made its debut at CHiPSAL, COLING 2025.

📌 What really drives multilingual models?
We always hear “more data is better,” but is that the whole story? Our latest work dives into what truly impacts multilingual model performance—spoiler: it’s more than just data size! Check out “Beyond Data Quantity: Key Factors Driving Performance in Multilingual Language Models” by Sina Bagheri Nezhad, Ameeta Agrawal, and Rhitabrat Pokharel at LoResLM, COLING 2025.

📌 Quizzing language models with linguistic ambiguity
Consider “the father of the baby who was driving to school”: who is behind the wheel? English syntax prefers “the baby” (though the little guy doesn’t have his license yet!). It’s a battle of syntax vs. semantics in our poster “Missing the Cues: LLMs’ Insensitivity to Semantic Biases in RC Attachment” by Russell Scheinberg, So Young Lee, and Ameeta Agrawal, presented at the annual meeting of the Linguistic Society of America (LSA) in January 2025.

 

🎤 Silicon Forest Tech Summit 2025

Prof. Agrawal delivered a lightning talk on the role of humans as AI evolves, reminding us that even as AI keeps improving, it is the human touch that shapes the narrative. The talk emphasized the importance of responsible research and education.


 

Chief Editor of the Month: Sina Bagheri Nezhad
Contributors: Ameeta Agrawal, Ekata Mitra, Rhitabrat Pokharel, Russell Scheinberg, Yufei Tao

Follow us on

X | GitHub

PortNLP @ Portland State University
nlp.cs.pdx.edu
If you have any questions, please email us at portnlp@pdx.edu.