Blog

Welcome to our blog, where we dive into captivating news articles and share exciting mini blog reports about our groundbreaking research and the events surrounding it. We’re thrilled to have a team of talented undergraduate and high school interns who are actively involved in curating and managing this platform, allowing us to connect with a wider audience. Get ready to embark on a journey of knowledge as we share our research findings and insights. Sit back, relax, and enjoy the fascinating world of science and innovation!

Jump to one of the news articles below.

 

Hello World Tutorial with Meta’s Llama 3.2

Hey there, tech explorers! Ever wanted to whip up some magical AI-generated text, like having robot Shakespeares at your fingertips? Well, today’s your lucky day! We’re here to break down a piece of Python code that lets you chat with a fancy AI model and generate text like pros. No PhDs required, we promise.

First Things First: The Toolbox

Before we can talk to the AI, we need to grab some tools. Think of it like prepping for a camping trip—you need a tent (the model) and some snacks (the tokenizer).

        
pip install transformers
        
    

This command installs the Transformers library, which is like the Swiss Army knife of AI text generation. It’s brought to you by Hugging Face (no, not the emoji—it’s a company!).
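
Heads-up: the snippets below also lean on a few companion packages: torch for the model math, accelerate for the device_map="auto" trick, and huggingface_hub for logging in. Some of these come along for the ride with transformers, but if your environment is missing any of them, this (hedged) one-liner should cover you:

pip install torch accelerate huggingface_hub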

Step 1: Unlock the AI Vault

We’ll need to log in to Hugging Face to get access to their cool models. Think of it as showing your library card before borrowing books.

        
from huggingface_hub import login
login("YOUR HUGGING FACE LOGIN")
        
    

Replace "YOUR HUGGING FACE LOGIN" with your actual login token. It’s how we tell Hugging Face, “Hey, it’s us—let us in!”

Step 2: Meet the Model

Now we load the AI brain. In our case, we’re using Meta’s Llama 3.2, which sounds like a cool llama astronaut but is actually an advanced AI model.

        
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
        
    

Tokenizer: This breaks down your input text into AI-readable gibberish.
Model: The big brain that generates the text.
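
Curious what that “gibberish” looks like? Here’s a tiny peek (illustrative only; the exact token IDs and pieces depend on the model’s vocabulary):

sample = tokenizer("Hello, llama!", return_tensors="pt")
print(sample["input_ids"])  # a tensor of token IDs, which is what the model actually reads
print(tokenizer.convert_ids_to_tokens(sample["input_ids"][0].tolist()))  # the text pieces behind those IDs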

Step 3: Give It Something to Work With

Now comes the fun part: asking the AI a question or giving it a task.

        
input_text = "Explain the concept of artificial intelligence in simple terms."
inputs = tokenizer(input_text, return_tensors="pt")
        
    

input_text: This is your prompt—what you’re asking the AI to do.
tokenizer: It converts your input into numbers the model can understand.

Step 4: Let the Magic Happen

Here’s where the AI flexes its muscles and generates text based on your prompt.

        
outputs = model.generate(
    inputs["input_ids"].to(model.device),   # send the prompt to wherever the model lives (GPU or CPU)
    attention_mask=inputs["attention_mask"].to(model.device),
    max_length=100,
    do_sample=True,          # sampling must be on for temperature/top_p to have any effect
    num_return_sequences=1,
    temperature=0.7,
    top_p=0.9,
)
        
    

inputs["input_ids"].to("cuda"): Sends the work to your GPU if you’ve got one.
max_length: How long you want the AI’s response to be.
temperature: Controls creativity.
top_p: Controls how “risky” the word choices are.
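
Want to see how these knobs change the output? Here’s a small experiment you can try (a sketch that assumes the model, tokenizer, and inputs from the earlier steps are already loaded):

for temp in [0.3, 0.7, 1.0]:
    out = model.generate(
        inputs["input_ids"].to(model.device),
        max_length=100,
        do_sample=True,   # sampling has to be on for temperature/top_p to matter
        temperature=temp,
        top_p=0.9,
    )
    print(f"--- temperature={temp} ---")
    print(tokenizer.decode(out[0], skip_special_tokens=True))

Lower temperatures tend to stay safe and repetitive; higher ones get more adventurous (and occasionally weird).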

Step 5: Ta-Da! Your Answer

Finally, we take the AI’s response and turn it back into human language.

        
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
        
    

skip_special_tokens=True tells the tokenizer, “Please don’t include the model’s internal bookkeeping symbols in the answer.”
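
If you’re curious what those symbols actually are, decode the same output twice and compare (illustrative; the exact markers depend on the model, but Llama-style tokenizers use special tokens such as <|begin_of_text|>):

raw = tokenizer.decode(outputs[0], skip_special_tokens=False)   # keeps the model's internal markers
clean = tokenizer.decode(outputs[0], skip_special_tokens=True)  # plain human-readable text
print(raw[:120])
print(clean[:120])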

So, What’s Happening Under the Hood?

Here’s a quick rundown of how this works:

  • We give the AI a prompt (our input text).
  • The tokenizer translates our words into numbers.
  • The model (our AI brain) uses these numbers to predict the best possible next words.
  • It spits out a response, which the tokenizer translates back into words.

It’s like ordering a coffee at Starbucks: we place the order, the barista makes it, and voilà—our coffee is ready!
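
And if you’d like the whole trip from prompt to text in one place, here’s a minimal sketch that wraps the steps above into a single helper (generate_reply is just a name we made up for this example; it assumes the tokenizer and model loaded earlier):

def generate_reply(prompt: str, max_length: int = 100) -> str:
    # Step 1: the tokenizer turns the prompt into token IDs.
    enc = tokenizer(prompt, return_tensors="pt")
    # Step 2: the model predicts a continuation of those IDs.
    out = model.generate(
        enc["input_ids"].to(model.device),
        attention_mask=enc["attention_mask"].to(model.device),
        max_length=max_length,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    # Step 3: the tokenizer turns the predicted IDs back into text.
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate_reply("Write a haiku about llamas."))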

Why Should We Care?

Generative AI APIs like this are the backbone of chatbots, creative writing tools, and even marketing copy generators. Whether we’re developers, writers, or just curious, playing with this code is a great way to dip our toes into the AI ocean.

Ready to Try It?

Copy the code, tweak the prompt, and see what kind of magic we can summon. Who knows? We might create the next big AI-powered masterpiece—or at least have some fun along the way.

Now go forth and generate! 🎉

Using Generative AI to Create Sustainable Business Plans: A Mini-Course

We had the incredible opportunity to deliver a mini-course at the University of Sonora on how to use generative AI to craft sustainable business plans. The session was designed to empower students and budding entrepreneurs to integrate cutting-edge AI tools, such as ChatGPT, into their business planning processes. This event showcased the practical applications of AI for innovation and sustainability, offering hands-on experience and collaborative learning.

The Course in Action

The course focused on teaching participants how generative AI can assist in every stage of business planning, including:

  • Brainstorming Ideas: Using AI to refine concepts, identify potential gaps, and generate creative alternatives.
  • Market Research: Employing AI for customer analysis, trend identification, and competitive landscape evaluation.
  • Feasibility Assessments: Exploring cost structures, revenue models, and risk mitigation strategies.
  • Customer Feedback Simulation: Generating insights by simulating customer reactions and improving marketing strategies.
  • SWOT Analysis: Leveraging AI to identify internal strengths and weaknesses and external opportunities and threats.

Participants were encouraged to experiment with AI tools to practice crafting mission statements, vision statements, and value propositions tailored to their sustainable business goals.

Collaborators Making a Difference

The course was a collaborative effort between three instructors, each bringing unique expertise to the table:

  • Dr. Saiph Savage
    A computer scientist and expert in human-centered AI, Saiph provided a technical and strategic perspective on how AI can be applied to the future of work and sustainable business practices.
  • Dr. Rafael Morales
    Originally from Mexico City and a PhD in Political Science from UNAM, Rafael brought a nuanced understanding of how to align AI-driven business strategies with government policies. He highlighted opportunities for collaboration between businesses and governments to promote social good.
  • Jesse Nava
    With extensive experience launching ventures for marginalized communities, including startups supporting low-income Hispanics and formerly incarcerated individuals in the U.S., Jesse shared real-world insights into how AI can help create inclusive, impactful business models.

Why Generative AI?

Generative AI, like ChatGPT, offers a unique advantage for entrepreneurs by providing accessible tools to:

  • Refine business ideas and strategies.
  • Perform rapid iterations to improve outcomes.
  • Enhance collaboration and creativity.
  • Develop sustainable and socially conscious plans.

The session emphasized the importance of human-centered design to ensure that AI tools remain inclusive, adaptable, and aligned with ethical practices.
Check out the full slides here

In the mini-course, we also guided students through using generative AI as a powerful tool for developing business ideas. We taught them how to craft effective prompts to:

  • Obtain Feedback on Business Ideas
    • Generate detailed insights and suggestions to refine their business concepts.
  • Identify Potential Markets
    • Explore and analyze who their target audience could be based on their business idea.
  • Create Social Media Content
    • Develop engaging and tailored social media posts targeting their identified market to build awareness and engagement.
  • Conduct Key Business Analyses
    • Perform essential evaluations such as SWOT analyses (Strengths, Weaknesses, Opportunities, Threats) and break-even analyses to better understand the viability and financial dynamics of their business plans.

By leveraging generative AI in these areas, students gained hands-on experience in building and refining comprehensive business strategies effectively and creatively.

Gratitude and Looking Ahead

We want to thank the University of Sonora, José Montaño Sánchez, and Dr. Alma Brenda Leyva Carreras for the invitation to deliver this course. It was an honor to collaborate with a multidisciplinary group of students and share knowledge at the intersection of AI, business, and sustainability.

As we move forward, we aim to continue these efforts, fostering a deeper understanding of how AI can empower diverse communities and create a brighter, more sustainable future for all.

Keynote Speaker at the Mexican AI Conference

Caption: My father, me, and Mexican Professor Beto Ochoa-Ruiz (chair of the Mexican AI Conference) at MICAI in Puebla.

It was an incredible honor to be a keynote speaker at the Mexican AI Conference (MICAI), a prestigious event with over 40 years of history organized by the Mexican Society for Artificial Intelligence. I had the privilege of presenting my research on designing worker-centric AI tools, and it was truly inspiring to see such a vibrant and thriving AI community in Mexico.

Conference Highlights

The conference itself was filled with fascinating talks and discussions. I especially enjoyed the presentation by Professor XX from the University of Toronto, who is pioneering AI systems to quantify and understand smell—an area of AI that I had not previously considered but found fascinating.

I also appreciated reconnecting with Professor Ricardo Baeza Yates, a distinguished researcher at Northeastern University. His work in establishing impactful AI labs in both industry and academia has been transformative, particularly in Latin America. His efforts with Yahoo Research have opened new pathways for research and innovation in the region, creating opportunities for countless researchers.

INAOE: A Unique Setting

The conference took place at the National Institute of Astrophysics, Optics, and Electronics (INAOE), a premier research institution in Mexico. Situated in a serene wooded area, INAOE is home to some stunning telescopes, blending cutting-edge technology with natural beauty. This unique setting added a special touch to the conference, enhancing the overall experience.

Caption: An overview of the speakers, participants, and organizers of the conference.

Special Moments with My Father

One of the most memorable aspects of this experience was attending the conference with my father, who has dedicated much of his career to AI and robotics. We drove together from Mexico City to Puebla, where the conference was held, and this journey gave us the unique opportunity to spend rich quality time together. Our conversations ranged from AI to personal reflections, making this trip a deeply meaningful experience for both of us.

Exploring the City of Puebla

Puebla is an impressive city, renowned for its rich history and architectural beauty. During our visit, I was particularly captivated by its churches, which showcase the Churrigueresque style. This ornate style is a fusion of local Indigenous art and Spanish Baroque, characterized by its elaborate decorations and intricate details. It stands as a testament to the cultural confluence that shaped Puebla’s identity.

Final Reflections

Overall, it was a privilege to be part of such a dynamic and supportive AI research community. The Mexican AI Conference not only provided a platform to share my research but also allowed me to engage with brilliant minds and immerse myself in the rich cultural and scientific landscape of Puebla.

Workshop on AI Tools for Labor at the AAAI HCOMP Conference

We recently had the honor of co-organizing a workshop at the AAAI Human Computation and Crowdsourcing Conference (HCOMP) in Pittsburgh. This workshop focused on designing AI tools for the future of work and brought together diverse perspectives and innovative ideas.

Keynote Speaker: Sara Kingsley

Caption: Sara Kingsley giving her keynote at our workshop, where she explained her research on designing human-centered AI for the future of work. She focuses especially on using red-teaming techniques to conduct online audits of AI in the workplace, identify biases in current AI tools, and then design AI-driven interventions to address those challenges.

One of the highlights of our workshop was having Sara Kingsley as our keynote speaker. Sara is a researcher at Carnegie Mellon University and has extensive experience working at Meta and within the US federal government, specifically in the Office of the Secretary of Labor.

In her engaging talk, Sara shared her work using red teaming—a process where experts challenge and test systems to find vulnerabilities or weaknesses—to identify problematic job ads and content related to job advertising. She explained how she applies red teaming to ensure that job ads do not propagate harmful biases or misleading information. This approach allows her to design human-centered AI tools that can create better, more equitable AI-driven futures for workers.

This type of research is critical as it helps to identify and mitigate potential biases and harms in AI systems before they impact real users. We were especially proud to note that Sara recently won the best paper award at HCOMP’24 on this very topic. Congratulations to Sara on this well-deserved recognition! We are proud to have had her as a keynote speaker in our workshop.

Co-Design Activity with Community Partners

Caption: Our research collaborator Jesse Nava in his workforce development programs for former/current prisoners.

Another unique aspect of our workshop was the co-design activity we held with workshop participants and current and former prisoners from California’s Department of Corrections and Rehabilitation. Our community partner, Jesse Nava, joined us via call, making this session truly impactful.

During this co-design activity, participants proposed ideas for generative AI tools that could support the reintegration of former prisoners into the workforce. Jesse provided invaluable feedback on these proposals, sharing his perspective on potential harms, biases, and areas where these tools could be improved to better serve the formerly incarcerated population.

This was a unique experience as it allowed us to receive direct feedback from real-world stakeholders who would be directly affected by these AI tools. The opportunity to co-design with such engaged partners highlighted the importance of including diverse voices and lived experiences in the development process.

Closing Thoughts

We concluded the workshop with a sense of excitement and renewed commitment to continue designing the future of generative AI tools together. This collaborative approach is key to creating technologies that are inclusive, fair, and genuinely supportive of the communities they aim to serve.

Thank you to everyone who participated and contributed to making this workshop a success. We look forward to future opportunities to innovate, collaborate, and create impactful AI solutions.

Driving the Future of Work in Mexico through Artificial Intelligence: My Experience in the Global Partnership on AI (GPAI)

As an expert selected by the Mexican federal government to be part of the Global Partnership on AI (GPAI), I have had the honor of contributing to the working group on “AI for the Future of Work.” My participation in this group has been an enriching and transformative experience, especially in the context of how artificial intelligence (AI) can and should positively impact the labor market in Mexico.

During my time with GPAI, I led several key initiatives that emphasize the importance of integrating AI into the workplace. One of the most notable was securing €20,000 to fund internships focused on creating AI for workers, specifically for Mexican students. These internships not only provide development opportunities for our youth but also foster the creation of technology that can improve working conditions in our country.

In these internships, we taught students the importance of developing human-centered artificial intelligence, an approach that prioritizes the well-being and needs of people in the design and implementation of technologies. Students learned to apply these principles while working on concrete projects, such as developing intelligent assistants for the Ministry of Foreign Affairs. These assistants were specifically designed to facilitate passport processing, improving the efficiency and accessibility of these services for Mexican citizens.

Additionally, in collaboration with INFOTEC, several UNAM students were hired as interns to implement artificial intelligence solutions in various government areas. This experience was crucial for students to apply their knowledge in a real-world setting and contribute directly to the modernization of the public sector. Collaborations between government and academia, like this one, are essential for integrating cutting-edge technologies and ensuring that Mexico remains at the forefront of AI use to improve public services.

Another significant contribution was leading the development of a new AI aimed at supporting both workers and the government. This project, developed by talented UNAM students, not only demonstrates the capabilities of our youth but also positions Mexico as a leader in the creation of labor-inclusive technology.

My work with GPAI has also allowed me to lead global studies on the impact of AI in the workplace, publishing scientific articles that have contributed to the international discussion on this crucial topic. Additionally, I have had the privilege of advising senators from the United States and Mexico on how AI can transform work, ensuring that informed decisions are made to benefit workers.

Recommendations for the Mexican Federal Government

Throughout this experience, I have developed some recommendations that I consider essential for Mexico to stay at the forefront of AI integration in the workplace:

  1. Creation of International Training Programs: It is essential that Mexico invests in training our citizens in the latest trends and AI technologies at a global level. This will not only improve our internal capabilities but also strengthen our position on the international stage.
  2. Internships in AI + GovTech: Propose the creation of internship programs that combine AI with GovTech, training future leaders at the intersection of technology and governance. This will allow for a more efficient and modern public administration.
  3. Strengthening the Support Network for Mexicans Abroad: The Ministry of Foreign Affairs should promote scientific and AI connections between Mexican and international universities. These collaborations will not only facilitate the exchange of knowledge but also help our nationals abroad access strong support networks and advanced technological resources.
  4. Promotion of Government-Academia Collaborations: It is crucial to strengthen collaborations between the government and academia to create a robust ecosystem that drives the development of new artificial intelligence technologies in Mexico. These alliances will allow young talent to integrate into projects that modernize and improve public administration, ensuring that technological innovations benefit society as a whole.

My participation in GPAI has not only been an honor but also an opportunity to positively influence the future of work in Mexico. Through these recommendations, I trust that our country can continue moving towards a future where AI is a tool for growth and the well-being of all Mexicans.

Dr. Savage’s Journey with the OECD’s Global Partnership on AI (GPAI)

By Dr. Savage

I’m thrilled to share my experience as an expert with the Global Partnership on AI (GPAI), an initiative launched by the Organization for Economic Cooperation and Development (OECD). The OECD is an intergovernmental organization dedicated to promoting policies that improve the economic and social well-being of people around the world. This journey has been both inspiring and impactful, as I work alongside brilliant minds from around the world to tackle some of the biggest challenges and opportunities AI presents. Note that each government that is part of the OECD selects experts to represent it in GPAI, and I was honored to be selected by Mexico’s federal government to be one of their AI experts.

Being a GPAI expert has empowered me to: (1) create new AI solutions focused on the future of work; (2) develop recommendations for policymakers on managing AI in the workforce; and (3) travel to different parts of the world to collaborate with other GPAI experts on creating AI technologies that benefit workers and promote positive outcomes in the workforce.

My last trip took me to Paris, where I joined forces with experts to design AI solutions tailored for unions in Latin America. In this blog post, I provide information about my trip, including insights into what GPAI is and how it operates. Stay tuned for my next blog post, where I will delve deeper into the innovation workshop held in Paris and share the exciting advancements we made there.

Why GPAI Was Created

The GPAI was set up to address the rapid advancements in AI, ensuring these technologies are developed ethically and inclusively. It aims to:

  • Promote responsible AI: Ensuring AI is used for good.
  • Enhance international cooperation: Sharing knowledge and best practices globally.
  • Support sustainable development: Using AI to solve global issues like health and education.
  • Encourage innovation: Driving advancements while managing risks.

How GPAI Works

GPAI brings together experts from various countries, chosen by their governments, to collaborate on key areas like:

  • Responsible AI
  • Data Governance
  • The Future of Work
  • Innovation and Commercialization

Dr. Savage’s Role and Contributions

I’m honored to have been named a GPAI expert by Mexico’s federal government. I’m part of the working group focusing on AI for the future of work. We’re exploring how AI impacts jobs and creating strategies to ensure it benefits workers rather than displaces them.

Mini Internships for Latin America

One of the most rewarding projects I’ve been involved in is setting up mini internships for students in Latin America, including Mexico and Costa Rica. These internships teach students about human-centered design for the future of work. We’re partnering with Universidad Nacional Autónoma de México (UNAM), Universidad de Colima, and Universidad de Costa Rica.

Students are interviewing workers to understand how they use AI at work. Based on what we learn, we’re developing new AI tools to support them better.

Innovation Workshop in Paris

As part of my GPAI role, I was invited to an innovation workshop in Paris. It was an incredible experience to meet and brainstorm with leading AI experts. The insights and ideas exchanged were invaluable, and I’m excited to bring this knowledge back to our projects in Latin America and the United States. In my next blog post, I will provide more details about the Paris innovation workshop and the exciting developments that emerged from it.

The Many Futures of Design: A Journey through UXPA 2024 🦩

by: Viraj Upadhya

If you’re passionate about User Experience (UX) and eager to stay at the forefront of the industry, the User Experience Professionals Association (UXPA) International is the place to be. UXPA supports people who research, design, and evaluate the UX of products and services, making it a hub for bright minds in UX, CX, AI, and related fields.

I had the privilege of attending UXPA’s recent conference at the beautiful Diplomat Beach Resort in Fort Lauderdale. With a theme centered on “The Many Futures of Design: Shaping UX with Futures Thinking,” the event was a melting pot of innovative ideas, engaging workshops, and vibrant networking opportunities. Here’s a detailed journey through the highlights of this incredible conference.

Conference Overview

The conference kicked off with an introduction to the theme and objectives, emphasizing AI, research methods, visualizations, and UX. From June 23rd to June 27th, 2024, attendees were treated to a mix of educational sessions, interactive workshops, and delightful activities like finding the Flamingo 🦩, Four in a Row, Lego Blocks, Cornhole, and a PhotoBooth, all while enjoying the stunning Miami Beach view.

Attendee Demographics

The event attracted 300-400 attendees, ranging from students and professors to corporate professionals and startup enthusiasts, aged 22 to 60. Companies like IBM, Fable, Maze, Vanguard, and the American Health Association were present, alongside sponsors like MeasuringU, K12 Edu, Cella, and Bentley.

Day 1 Conference Overview: Workshops and Sessions

The conference offered a rich variety of workshops and sessions, each providing valuable insights and practical knowledge.

Enhancing User Research with AI by Corey Lebson

The AI Revolution in UX Research

Corey Lebson’s workshop was a deep dive into the transformative potential of AI in UX research. Attendees explored generative AI tools like ChatGPT, Claude, and Microsoft Copilot, which are redefining how we approach user research. Lebson demonstrated how these tools can streamline the creation of test hypotheses and analyze user data with unprecedented precision.

Key Takeaways
  • Generative AI Tools: Tools such as ChatGPT can generate insightful user feedback, making it easier to understand user needs and behaviors.
  • AI-Specific Research Tools: Platforms like LoopPanel and User Evaluation are tailored for UX researchers, offering features that enhance data collection and analysis.
  • Test Hypotheses: Lebson emphasized the importance of creating clear hypotheses to guide research, exemplified by statements like, “If the navigation used a larger font and higher-contrast color, then more users will click on the links due to its increased prominence.”

Data Visualization + UX (DVUX) for Dashboards, User Interfaces, and Presentations by Thomas Watkins

Visuals That Speak

Thomas Watkins captivated the audience with his insights into the power of data visualization in UX. He argued that sometimes, diagrams work better than traditional visualizations, especially in illustrating complex relationships.

Key Insights
  • Diagrams vs. Visualizations: Diagrams can effectively show relationships and hierarchies, making complex information more digestible.
  • Clarity in Communication: Using the right visual tool is crucial for effective communication, ensuring that data is not only presented but understood.

Measuring Design Impact Through a UX Metric Strategy by William Ryan

Quantifying UX Success

William Ryan’s workshop focused on the often elusive task of measuring design impact. By introducing a UX metric strategy, Ryan provided a roadmap for evaluating the effectiveness of design choices through concrete metrics.

Key Takeaways
  • Learnability Metrics: Assessing how quickly users can learn to navigate a product is essential for understanding usability.
  • Task Completion Times: Measuring how long it takes for users to complete tasks can highlight areas for improvement.
  • User Segmentation: Differentiating between novice and expert users allows for more tailored UX strategies.

Visual Storytelling for Research Impact: UX Research Models & Frameworks by Sophia Timko

The Art of Visual Storytelling

Sophia Timko’s session on visual storytelling was a masterclass in presenting research findings. She underscored the importance of clear and compelling visuals to convey research insights effectively.

Key Takeaways
  • Best Practices: Timko shared frameworks and models for visual storytelling that enhance the impact of UX research.
  • Effective Communication: The session emphasized the need for visuals that not only present data but tell a story, making complex research accessible and engaging.

AI TwoWays: Enhancing Human-AI Interaction

Bridging the Communication Gap

This intriguing session delved into the dual nature of AI explanations. The focus was on how AI can enhance human understanding and trust by providing clear, contextually relevant explanations.

Key Takeaways
  • User-Centric AI: Designing AI systems that dynamically explain their logic helps build user trust.
  • Two-Way Communication: AI systems should not only provide answers but also seek to understand user intent, creating a more interactive and intuitive experience.

Building Connections: Trust, Culture & Engagement in Remote Teams by Lauren Schaefer

Cultivating Remote Team Excellence

Lauren Schaefer’s workshop addressed the unique challenges of leading remote design teams. Her insights into building trust and fostering engagement were particularly relevant in today’s increasingly remote work environment.

Key Practices
  • Team Rituals: Regular team-building activities, such as virtual games and personality assessments, help strengthen remote teams.
  • Culture of Trust: Schaefer emphasized the importance of creating a culture where team members feel valued and connected, even from afar.

Data Visualization: The Good, the Bad, and the Dark Patterns by Douglas Johns & Andrea Sanny

Navigating the Data Visualization Landscape

Douglas Johns and Andrea Sanny’s session was a journey through the landscape of data visualization, highlighting both best practices and common pitfalls.

Key Insights
  • Good Practices: Effective data visualizations are clear, accessible, and honest. Annotations, appropriate use of whitespace, and consideration of accessibility are crucial.
  • Dark Patterns: The session warned against misleading visualizations, which can distort data and misinform stakeholders.

Behind the Bias: Dissecting Human Shortcuts for Better Research & Designs by Lauren Schaefer

Understanding Cognitive Biases

Lauren Schaefer’s workshop explored the impact of cognitive biases on user research and design. She provided strategies for mitigating these biases to ensure more accurate and inclusive research outcomes.

Key Takeaways
  • Recognizing Biases: Understanding common cognitive biases helps in designing better research protocols.
  • Inclusive Research: Ensuring diverse participant pools and employing mixed research methods can mitigate the effects of bias.

Building Generative AI Features for All – Panel Discussion

Inclusive AI Design

The panel discussion on building generative AI features brought together diverse voices from Google’s product team. The panelists shared their experiences and strategies for creating inclusive and accessible AI products.

Key Insights
  • Diverse User Needs: Designing for a wide range of user identities, including age, race, gender, and disability, is essential for creating inclusive products.
  • Adapting Processes: The panel highlighted the importance of continuously adapting design processes to meet the evolving needs of diverse users.

Creating Healthier Team Functionality & Product Team Alignment Through Play

This interactive workshop focused on enhancing team dynamics and product team alignment through the use of play and embodied experiences. The session highlighted how incorporating playful activities can build trust and strengthen relationships within cross-functional teams, leading to more effective collaboration and alignment.

Key Insights
  • Role of Play: Playful activities can break down barriers and foster trust among team members, making it easier to align on goals and overcome conflicts.
  • Embodied Experiences: Engaging in physical, interactive exercises can create lasting bonds and improve team cohesion.
  • Long-Term Impact: The effects of these activities can last for several months, enhancing overall team functionality and productivity.
Takeaways
  • Implement interactive and playful activities in team meetings, readouts, and offsite sessions to build stronger connections and improve alignment.
  • Use these activities to address and resolve conflicts, making collaboration more effective and enjoyable.

Ethical UX: What 96 Designers Taught Us About Harm

This session explored the ethical challenges faced by UX designers and the potential harm that can arise from design decisions. By analyzing insights from a survey of 96 UX and product designers, the talk aimed to foster a deeper understanding of design ethics and provide actionable steps for integrating ethical considerations into design practices.

Key Insights
  • Design Ethics: Navigating the ethical landscape of UX design is crucial for avoiding harm and ensuring user-centered practices.
  • Categories of Harm: Understanding different types of harm can help designers anticipate and mitigate negative impacts of their work.
  • Shared Language: Developing a common language around ethical design practices can improve communication and awareness within teams.
Takeaways
  • Adopt ethical design practices by identifying potential sources of harm and addressing them proactively.
  • Utilize provided templates and actionable steps to integrate ethical considerations into design workflows.

Measuring Tech Savviness: Findings from 8 Years of Studies and Practical Use in UX Research

This session presented findings from a long-term study aimed at measuring tech savviness in users. The research focused on developing a reliable metric for assessing tech savviness, which helps differentiate between user abilities and interface issues.

Key Insights
  • Tech Savviness Metric: A validated measure of tech savviness can provide valuable insights into user capabilities and interface usability.
  • Research Approaches: The study utilized various methods, including technical activity checklists and Rasch analysis, to refine the metric and validate its effectiveness.
  • Predictive Validation: The metric explained a significant portion of the variation in task completion rates, demonstrating its practical utility.
Takeaways
  • Apply the tech savviness measure in UX research to better understand user capabilities and improve interface design.
  • Use the findings to inform design decisions and enhance the overall user experience.

Menu Mania: What’s Wrong with Menus and How to Fix Them

This presentation addressed common issues with menu design in websites and applications. It reviewed best practices for creating effective menus, including mega menus, context menus, hamburger menus, and more, drawing from case studies of successful redesigns.

Key Insights
  • Menu Design Challenges: Menus are crucial for navigation but can be frustrating if not designed properly. Common issues include scaling, usability with hover and flyout menus, and accessibility.
  • Best Practices: Effective menu design involves understanding user needs and implementing best practices to improve usability and accessibility.
Takeaways
  • Apply best practices for menu design to enhance user navigation and reduce frustration.
  • Consider case studies and examples to guide your own menu redesigns and address common design challenges.

Case Studies: Advocating for Qualitative UX Research

This session provided a comprehensive guide to advocating for qualitative UX research. It covered techniques for persuading stakeholders of the value of qualitative research and shared strategies for overcoming objections and building a strong business case.

Key Insights
  • Advocacy Techniques: Effective persuasion and storytelling are key to convincing stakeholders of the importance of qualitative research.
  • Overcoming Objections: Address common resistance to qualitative research by demonstrating its impact on user experience and business outcomes.
  • Building a Business Case: Create a compelling case for qualitative research by aligning it with organizational goals and securing necessary resources.
Takeaways
  • Utilize persuasion techniques and storytelling to advocate for qualitative research within your organization.
  • Develop a robust business case for qualitative research to ensure it is valued and supported in design processes.

Interactive Networking Methods

The conference also featured several interactive networking methods to foster connections:

  • Flash Cards: Questions like “What would your ideal job workplace look like?” and “Who do you look up to in this decade?” sparked engaging conversations.
  • LEGO Building Exercise: Encouraged creative collaboration.
  • Games: Activities like CornHole, Connect4, and Karaoke helped attendees bond in a relaxed setting.

My Takeaway: A good conversation starter is always, “Hi, I’m [your name]. What do you think about …?” followed by a topic you’d like to discuss.

Get Involved with UXPA

Interested in joining UXPA? Here are some ways to get involved:

  • Attend Conferences: Engage with the community and learn from experts.
  • Participate in Workshops: Enhance your skills through hands-on learning.
  • Network: Build connections with professionals from various industries.
  • Contribute to Publications: Share your insights and experiences.

Membership offers access to a wealth of resources, including educational events, publications, and career resources. Whether you’re a seasoned professional or just starting in UX, UXPA provides valuable opportunities to learn, grow, and connect with like-minded individuals.

To learn more about UXPA and become a member, visit their website.

Conclusion

UXPA 2024 was a celebration of innovation, collaboration, and the future of UX. From insightful workshops and sessions to enjoyable networking activities, the conference offered something for everyone. As we continue to navigate the evolving landscape of UX, the connections and knowledge gained at UXPA will undoubtedly shape the future of design.

Note: I have written a detailed synthesis of each of these sessions. To learn more about any session you’re interested in, email me at upadhyay.v@northeastern.edu

Lessons on Polarization from Global Leaders at the Ford Foundation

I recently had the privilege of attending an event hosted by the Ford Foundation and the Institute for Integrated Transitions (IFIT) in their New York offices. This convening brought together global leaders to share lessons on polarization, aiming to enhance our understanding and strategies concerning the challenges the United States currently faces in this area.

The event kicked off with an insightful introduction by Hilary Pennington, the Executive Vice President of Programs at the Ford Foundation. Hilary set the stage by discussing the importance of this gathering, especially in today’s rapidly polarizing world.

Panel Discussion

A panel moderated by Mark Freeman, the Executive Director of IFIT, featured an impressive lineup of speakers who provided firsthand accounts of dealing with polarization in their countries. The panel included:

  • General Óscar Naranjo (Colombia) — a renowned former Director General of the Colombian National Police, General Naranjo was a lead negotiator in the Colombian government’s peace talks with the FARC and went on to serve as Minister for Post-Conflict and then Vice President of the Republic. He also participated in intelligence operations that led to the death of Pablo Escobar, the boss of the Medellín Cartel.
  • Hon. Ms. Ouided Bouchamaoui (Tunisia) — a prominent national business leader, Ms. Bouchamaoui was awarded the Nobel Peace Prize in 2015 for her leadership in the Tunisian Quartet that prevented a civil war and helped usher in the country’s modern constitution.
  • President Chandrika Kumaratunga (Sri Lanka) — Sri Lanka’s first and only female Executive President, for eleven years she led the country during its brutal civil war, including surviving an assassination attempt, before later serving as Chairperson of the Office for National Unity and Reconciliation.
  • Rev. Dr. Samuel Kobia (Kenya) — the first General Secretary of the World Council of Churches to be elected from Africa, Rev. Dr. Kobia served as ecumenical special envoy to Sudan and as Senior Advisor to Kenya’s President, before assuming his current role as Chairman of Kenya’s National Cohesion and Integration Commission.
  • Hon. Ms. Monica McWilliams (UK) — cofounder of the Northern Ireland Women’s Coalition cross-community political party and its lead negotiator in the peace talks that led to the 1998 Good Friday Agreement, Ms. McWilliams later served as Chief Commissioner of the Northern Ireland Human Rights Commission from 2005-2012.

Each leader shared moving stories and lessons from their experiences in creating peace agreements and navigating through intense national crises. They discussed how polarization often feels like a race to the bottom, highlighting the difficulty of recognizing when societies have hit rock bottom and the critical need for action to change the prevailing culture of division.

Insights from Monica McWilliams

One poignant moment was when Ms. McWilliams shared how personal losses due to polarization pushed her towards realizing the urgent need for cultural and systemic change. This resonated deeply with me, especially considering my current research on AI tools for incarcerated individuals, emphasizing the necessity of including diverse voices in dialogue and policy-making to combat polarization effectively.

General Naranjo’s Approach

General Naranjo emphasized the critical need to not tolerate violence and to enhance the visibility of victims. This resonates deeply with ongoing initiatives in Mexico to commemorate victims of violence, exemplified by the recent erection of statues throughout the city to honor women who have disappeared or been murdered. Such actions are crucial for cultivating a culture that decisively rejects violence and prioritizes understanding over victory. This theme of visibility also aligns with our research on sousveillance tools for workers, through which we help them document and quantify workplace harms. Promoting visibility is an effective strategy to combat polarization and violence, ensuring that victims receive the recognition they deserve. I am proud to contribute to this important research on sousveillance.

Duet by US Leaders

The event also featured a “duet by US Leaders,” with Ai-Jen Poo from the National Domestic Workers Alliance and Brian Hooks from Stand Together, who discussed the role of fear in fueling polarization and the potential of caregiving as a central strategy to counteract this through enhancing well-being and fostering mutual respect.

Lunch Discussions

Lunch was not just a meal but an extension of the learning environment, with table discussions that allowed us to dive deeper into strategies for building trust and understanding across different communities. This was particularly enlightening as we shared strategies on engaging with rural communities in the US, recognizing common goals, and avoiding divisive topics.

This convening by the Ford Foundation and IFIT was not only timely but also a crucial reminder of the ongoing need for dialogue, understanding, and proactive efforts to address polarization. It has inspired me to think more critically about how we can apply these global lessons to the US context and beyond, particularly through my work with AI and community engagement. The connections made and insights gained will undoubtedly influence my approach to research and activism moving forward.

Insights from the Premier Scientific Conference on HCI: CHI 2024

My research lab and I had the privilege of attending and presenting our research at the premier scientific conference in human-computer interaction (CHI’24), which was held in Hawaii this year.

This premier Human-Computer Interaction (HCI) conference showcased a plethora of innovative research focused on enhancing the interaction between humans and technology. In this blog post, I am thrilled to share a deeper dive into several studies that align and inspire our research on designing empowering tools for gig workers.

Our Research: Worker-Centric Sousveillance Tools

Before we dive into the interesting new research we heard about at CHI’24, I would like to share a bit about what my lab was proud to present at the conference! We presented our new research on designing sousveillance tools for gig workers, which, through interviews and co-design sessions, identified how gig workers imagined and desired tools that would allow them to collect their own data about their workplace, as well as any concerns gig workers might have about such technology.

You might be wondering: why do gig workers need this type of tool? Gig platforms suffer from an information asymmetry problem, where workers have less access to information about their workplace than other stakeholders on the platform. For example, workers on gig platforms normally cannot see whether they are earning less than other workers or whether low wages are the norm on the platform. Similarly, workers usually cannot easily share information about their clients to alert each other when a client is a fraudster. Gig platforms have been designed in a way that usually keeps workers in the dark about what is happening in their workplace. This is why it was important for us to think about what worker-centric tools that give workers access to their own workplace data should look like.

Note that we use the term “sousveillance” for this technology because sousveillance is about people without power (in this case, workers) being able to conduct surveillance over those who hold power (e.g., their algorithmic bosses). The term contrasts with surveillance, which is about people in power monitoring those who do not have power (e.g., bosses monitoring workers). My students, undergraduate Maya De Los Santos and PhD student Kimberly Do, presented our research. I am very proud of them and of the research they conducted together with Dr. Michael Muller and myself.

Link to our paper

Relevant Research Highlights

CHI’24 offered a range of paper presentations that enriched our understanding of HCI’s role in labor dynamics (an important aspect of our research), each bringing unique insights that intersect with our research goals. Some of these papers include:

  • Self-Tracking in the Gig Economy: From The Pennsylvania State University, researchers delved into how gig workers engage in self-tracking to manage their responsibilities across different identities. This study provides a nuanced view of the self-surveillance gig workers perform to balance personal and platform demands, complementing our research on external surveillance.
    Paper link
  • AI and Worker Wellbeing: A study by Northeastern University and the University of Chicago examined the resistance and acceptance of AI systems that infer workers’ wellbeing from digital traces. This research is crucial as we consider ethical implications in our sousveillance tools, ensuring they support rather than undermine worker autonomy.
    Paper link
  • Interaction Challenges with AI in Programming: Insights from Wellesley College and Northeastern University into how beginning programmers interact with AI in coding presented an interesting parallel to our work. Understanding these interaction barriers helps inform our design of more intuitive interfaces for gig workers interacting with AI tools.
    Paper link
  • Designing with Incarcerated Workers: The University of California, Irvine shared compelling work on using participatory design with marginalized groups, such as recently incarcerated youth, to create mixed reality tools. Their approach underscores the value of involving underrepresented communities in design processes, a principle central to our research ethos. We have also recently begun working with incarcerated individuals in California, such as Jesse Nava. This research from UC Irvine helped us start to identify how we could potentially conduct co-design sessions with incarcerated people. We are looking forward to continuing this research.
    Paper Link
  • Temporal Flexibility and Crowd Work: Research from University College London highlighted the constraints on crowdworkers’ temporal flexibility, underscoring similar challenges gig workers face in managing work schedules under rigid platform algorithms. This research was especially relevant to other work we are conducting on understanding how workers manage their time, how we can best design tools that support their different temporal preferences, and when the platform might be imposing time constraints on workers that are unnecessary and harmful.
    Paper Link
  • Data Labeling and AI Interventions in Crowdsourcing: A study from the University of Washington introduced ‘LabelAId’, a tool that uses AI to improve the quality and knowledge of crowdworkers performing data labeling. This aligns with our interest in tools that enhance worker capabilities and autonomy.
    Paper link
  • Cognitive Behavioral Therapy-Inspired Digital Interventions: The University of British Columbia’s exploration of therapy-inspired digital tools for knowledge workers tackled the balance between productivity and well-being, a balance we aim to address in gig work environments.
    Paper link

Expanding Our Horizons

These presentations and papers not only expanded our understanding of the challenges faced by workers in the gig economy but also illustrated the breadth of opportunities for HCI research to intervene positively. Each study provided valuable insights into different aspects of how technology interfaces with labor dynamics, from enhancing worker autonomy to addressing systemic issues through design.

Looking Forward

Inspired by the innovative ideas and critical discussions at CHI’24, we are excited to continue refining our projects. The conference has invigorated our commitment to developing HCI solutions that genuinely empower workers and contribute positively to the broader discourse on labor and technology.

Stay tuned for more updates as we apply these enriched perspectives to our ongoing and future research projects, continuing to advocate for and develop technologies that uphold the dignity and rights of workers.

Into the World of AI for Good:

Reflections on My First Week in the Civic AI Lab

By: Undergraduate Researcher Liz Maylin from Wellesley College.

Last Tuesday, I began my journey in the Civic AI Lab at Northeastern University with a mix of excitement, curiosity, and gratitude for the experience. A special thanks to Professor Eni Mustafaraj (Wellesley College) and Dr. Saiph Savage (Northeastern) for this opportunity. I embarked on my first week in this transformative space, eager to learn and contribute to the lab, which studies problems involving people, worker collectives, and non-profit organizations and creates human-centered systems to address those problems. Some of the objectives of the lab include fighting disinformation and creating tools in collaboration with gig workers. Previous projects include designing tools for Latina gig workers, systems for addressing data voids on social media, and a system for quantifying the invisible labor of crowd workers. Nestled at the crossroads of Human-Computer Interaction, Artificial Intelligence, and civic engagement, the research of this lab is thoughtful and resoundingly impactful.
I am honored to join a project that will support workers in their collective bargaining efforts.

First Impressions at Northeastern

During my first days, the differences between Wellesley College and Northeastern stood out to me the most. Wellesley College is located 12 miles outside of Boston in the extremely quiet, wealthy town of Wellesley whereas Northeastern is located directly in the city, allowing for greater access and a larger community. I get to walk past Fenway Park, various restaurants, boba shops, a beautiful park, and the Museum of Fine Arts on my commute! It is definitely a change of setting but I am happy for the experience. So much to explore!
First weeks are exciting because there is so much to learn and so many new people to meet. The research team is full of amazing, talented students that I am excited to collaborate with and learn from. As I prepare for the graduate school application process, I am fortunate to gain insight into the lives of PhD and Master’s students that will help me make informed decisions about my own academic journey. Everyone has been very welcoming and helpful, and I am thrilled to spend this summer with them.

Exploring Gig Work and Participatory Design

I have spent most of my first week getting familiar with gig work and participatory design through a literature review. Gig work is a type of employment arrangement where individuals perform short-term jobs or tasks; it includes independent contractors, freelancers, and project-based work. Gig work is often presented as the opportunity to “be your own boss” and “work on your own time”; however, this line of work comes with challenges such as irregular income, limited job security, and typically no benefits. The use of digital platforms has facilitated connection between workers and employers; however, there is room for improvement that would benefit both users and platforms. Participatory design is a method that includes stakeholders and end-users in the process of designing technologies, with the goal of creating useful tools or improving existing ones. For example, researchers at the University of Texas at Austin held sessions with drivers from Uber and Lyft to reimagine a design of the platform that would center their well-being. It’s fascinating work that unveils different solutions and possibilities capable of reconciling stakeholder and worker issues.

Learning about Data Visualization

Additionally, I have been getting acquainted with different forms of data visualization. I have some experience programming with Python, but usually for problem sets or web scraping, so I was filled with anticipation to acquire a new skill. Specifically, I have focused on text analysis. With the help of tutorials, Google, and Viraj from our lab, I was able to make a word cloud showing the most frequent words in a dataset of women’s clothing reviews from 2019 (shown below).

Through this process, I learned about resources such as Kaggle and DataCamp that provide datasets and tutorials for practicing data work. I originally tried using the NLTK library, but I had several problems with my IDE (VS Code). With some troubleshooting help from lab members, I switched my approach to just using pandas, matplotlib, and wordcloud. I am happy I got it working, and I’m looking forward to refining this skill.
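For anyone curious what that workflow looks like in practice, here is a minimal sketch of the approach I ended up with. The file name and column name below are placeholders for the Kaggle reviews dataset, so adjust them to whatever dataset you download.

import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS

# Load the reviews dataset (file name is a placeholder for the Kaggle CSV).
reviews = pd.read_csv("womens_clothing_reviews.csv")

# Join every review into one long string, skipping missing entries.
text = " ".join(reviews["Review Text"].dropna().astype(str))

# Build the word cloud, filtering out common English stopwords.
cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=STOPWORDS).generate(text)

# Display the most frequent words.
plt.figure(figsize=(10, 5))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()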
As I wrap up my first week, I am beyond excited for the opportunities that lie ahead. This experience has ignited a passion for leveraging technology for civic engagement. I am grateful for the warm welcome, the technical help, and the inspiring conversations from this week. I am eager to collaborate and contribute to the work of the lab. 🙂

Starting My High School Summer Internship at the Northeastern Civic AI Lab

By: High school Intern Simon Juknelis

Intro

Hi! I’m Simon Juknelis, a rising high school senior at Noble and Greenough in Dedham, MA, and this week, I am beginning my work as an intern at the Civic AI Lab at Northeastern University. I’ve always been interested in building projects that help other people and make an impact, and that’s what I hope to achieve over the course of this internship.

The lab’s overall mission is to build new technology solutions that create equitable positive impacts and that empower all members of society. To achieve this goal, the lab works with organizations such as the National Science Foundation and UNESCO, as well as tech industry leaders such as Twitch and Meta. The lab has done research in a large number of areas, such as using AI to prevent disinformation and studying data labeling work, and I’m very excited to get to work with this team!

Participatory Design

Our lab’s work over the coming months will involve building software solutions for use by and to benefit gig workers. As such, I read up on the methodological framework of participatory design. Participatory design encapsulates the idea of giving the users of a product the power to shape it to fit their needs. Participatory design can be carried out with interviews and workshop sessions with a sample of potential users of the product. The future users should be given the ability to give suggestions during the ideation phase of the product as well as at various stages throughout its development.
Our lab will be using participatory design over the next few months in order to conduct our research and build solutions. As we work on finding research participants and setting up interviews, I decided to test my technical skills by building a small web plugin called ProductWords, which allows users to look through Amazon products, add them to a list, and see statistics about them.

Participatory Design for Gig Workers

In fields like gig work, there can often be a large power imbalance between a single worker, on one hand, and the corporate clients and work platforms, on the other, that provide the gig worker’s income. As such, when tools are designed for gig workers, they are often designed without gig workers’ actual needs in mind; instead, they are designed by the platforms based on what the platforms think the workers need, or even based on what would benefit the platforms or the clients. Participatory design is therefore an important tool to ensure that tools built for gig workers actually benefit those workers.

Designing Tools for Gig Workers with Figma

Part of our lab’s work will involve using a software platform called Figma for UI/UX design. One of the main advantages of a service like Figma is the collaboration it enables. Ideas about interface layout, animations, and functionality can be more easily communicated between team members, and multiple team members can work together on the same files to create a unified design workspace.
I used Figma to design the interface for ProductWords. Doing it this way was especially helpful because I have not yet finished implementing all of these visual elements into ProductWords, but I still have a good sense of what I want the final product to look like and I’ll be able to look back on this Figma doc to see what I should implement.

Data Visualizations for Gig Workers

Our lab is also planning on making extensive use of data visualization in our research on the gig economy. One common and easy-to-understand form of data visualization is the word cloud, which displays words at a size corresponding to their frequency in a given text. One of the resources our team was using described how to create a word cloud using a Python library; however, as I was building ProductWords as a web plugin, I needed to find a way to do this with JavaScript.
I found a JavaScript library called D3 which is a general-purpose solution for creating visual representations of data to be displayed on web pages. Combined with an extension for D3 created by Jason Davies, I was able to create word clouds based on the descriptions of the Amazon products in the list.

Other Technical Aspects

One of the main reasons I decided to make this web plugin was that I wanted to practice some of the features of web plugins that we might want to use for our lab’s research. With ProductWords, I implemented web scraping (pulling the Amazon item description and price information), a popup page, and communication between the web-scraper background script and the popup script.
ProductWords is not a very useful plugin yet, but I got some good practice implementing the features that will probably be necessary for any version of the web plugin(s) our lab will work on, and maybe it could even be used as a jumping-off point that gets evolved into our final product.

Human Centered AI Live Stream: Sota Researcher!


Research engineer Phil Butler from our lab is starting a new live stream on Human Centered AI. Through his live stream he will help you design and implement AI for people.

  • In each live stream you will learn how to design and create AI for people from start to finish. He will teach you how to use different design methodologies, such as mockups, storyboards, and service design, as well as different AI models and recent state-of-the-art techniques. Each live stream will also include code that helps you build a complete AI-for-people project.
  • Some of the topics he will cover in his live stream include: Understanding and Detecting Bias in AI; Design principles for Designing Fair and Just AI; How to Create Explainable AI.
  • The streams will benefit anyone who wants to learn how to create AI on their own while also respecting human values.
  • The stream will help people learn how to implement AI using state-of-the-art techniques (which is key for landing top industry jobs) while also being ethical and just about the AI that is created.

Join us! https://www.youtube.com/@sotasearcher

Designing Public Interest Tech to Fight Disinformation.

By: Victor Storchan

Our research lab organized a series of talks with NATO around how to design public interest infrastructure to fight disinformation globally. Our collaborator Victor Storchan wrote this great piece on the topic:

Disinformation has increasingly become one of the most prominent threats to, and a global challenge for, democracies and our modern societies. It is now entering a new era where the challenge is two-fold: it has become both a socio-political problem and a cyber-security problem. Both aspects have to be mitigated at a global level but require different types of responses.

Let’s first give some historical perspective.

  • Disinformation didn’t emerge with the automation and social network platforms of our era. In the 1840s, Balzac was already describing how praising or denigrating reviews were spread in Paris to promote or undermine publishers of novels or the owners of theaters. However, innovation, and AI in particular, has given threat actors technological capabilities to scale the creation of misleading content.
  • More recently, in the 2000s, technologists were excited about the ethos of moving fast and breaking things. People were basically saying “let’s iterate fast, let’s ship quickly, and let’s think about the consequences later.”
  • After the 2010s, with deep learning increasingly used in industry, we have seen a new tension emerge between velocity and validation. It was not about the personal philosophy of different stakeholders asking to go “a little bit faster” or “a little bit slower,” but rather about the cultural and organizational contexts of most organizations.
  • Now, AI is entering the new era of foundation models. With large language models we have consumer-facing tools like search engines and recommendation systems. With generative AI, we can turn audio or text into video at scale very efficiently. The technology of foundation models is at the same time becoming more accessible to users, cheaper, and more powerful than ever. That means better AI to achieve complex tasks, to solve math problems, to address climate change. However, it also means cheap fake media generation tools and cheap ways to propagate disinformation and target victims.

This is the moment we are in today. Crucially, disinformation is not only a socio-political problem but also a cyber-security problem. Cheap deep-fake technology has been commoditized, enabling targeted disinformation in which people receive specific, personalized disinformation through different channels (online platforms, targeted emails, phone). It will become more fine-grained. It has already started to affect their lives, their emotions, their finances, their health.

The need for a multi-stakeholder approach as close as possible to the AI system design. The way we mitigate disinformation as a cyber-security problem is tied to the way we deploy large AI systems and to the way we evaluate them. We need new auditing tools and third-party auditing procedures to make sure that deployed systems are trustworthy and robust to adversarial threats and to toxic content dissemination. As such, AI safety is not only an engineering problem but a multi-stakeholder challenge that will only be addressable if non-technical parties are included in the loop of how we design the technology. Engineers have to collaborate with experts in cognition, psychologists, linguists, lawyers, journalists, and civil society in general. Let’s give a concrete example: mitigating disinformation as a cyber-security problem means protecting the at-risk user and possibly curing the affected user. It may require access to personal and possibly private information to create effective counter-arguments. As a consequence, it implies arbitrating a tradeoff between privacy and disinformation mitigation that engineers alone cannot decide. We need a multi-stakeholder framework to arbitrate such tradeoffs when building AI tooling, as well as to improve transparency and reporting.

The need for a macroscopic multi-stakeholder approach. Similarly, at a macroscopic level, there is a need for profound global cooperation and a coalition of researchers to address disinformation as a global issue. We need international cooperation at a very particular moment, in a world that is being reorganized. We are living in a moment of great paradox: we see new conflicts emerging and structuring the world, and at the same time, disinformation requires international cooperation. At the macroscopic level, disinformation is not just a technological problem; it is one additional layer on top of poverty, inequality, and ongoing strategic confrontation. Disinformation is a layer that adds to the international disorder and amplifies the others. As such, we also need a multi-stakeholder approach bringing together governments, corporations, universities, NGOs, the independent research community, etc. Very concretely, Europe has taken legislative action (the DSA) to regulate harmful content, but it is now clear that regulation alone won’t be able to analyze, detect, and identify fake media. In that regard, the Christchurch Call to Action summit is a positive first step, but it has not yet led to systemic change.

The problem of communication. However, the communication between engineers, AI scientists, and non-technical stakeholders generates a lot of friction. These multiple worlds don’t speak the same language. Fighting disinformation is not only a problem of resources (access to data and access to compute power); it is also a problem of communication, where we need new processes and tooling to redefine the way we collaborate in alliances to fight disinformation. These actors are collaborating in a world where it is becoming increasingly difficult to understand AI capabilities and, as a consequence, to put in place the right mechanisms to fight adversarial threats like disinformation. It is more and more difficult to really analyze the improvement of AI. This is what Gary Marcus calls the demoware effect: a technology that looks good in a demo but not in the real world. It confuses people, and not only political leaders but also engineers (e.g., Blake Lemoine at Google). Many leaders assume false capabilities about AI and struggle to monitor it. Let us suggest two causes. First, technology is more and more a geopolitical issue, which does not encourage more transparency and accountability. Second, the information asymmetry between the private and public sectors, and the gap between the reality of the technology deployed in industry and the perception of public decision-makers, has grown considerably, at the risk of focusing the debate on technological chimeras that distract from the real societal problems posed by AI, like disinformation and the ways to fight it.


Recap HCOMP 2022

This week we attended the AAAI Conference on Human Computation and Crowdsourcing (HCOMP’22). We were excited about attending for several reasons: (1) we were organizing HCOMP’s CrowdCamp and were excited to help drive the direction of this event within the conference; (2) it was the 10-year anniversary of the conference, and we were elated to reflect collectively on how far we have come as a field over the years; (3) we chaired one of the keynotes of HCOMP, given by our PhD hero, Dr. Seth Cooper; and (4) we had an important announcement to share with the community!

WE WILL BE GENERAL CO-CHAIRS OF HCOMP’23!

Organizing CrowdCamp.

This year, Dr. Anhong Guo from the University of Michigan and I had the honor of organizing HCOMP’s CrowdCamp, a very unique part of the HCOMP conference. It is a type of mini hackathon where you get together with crowdsourcing experts and define the novel research papers and prototypes that push forward the state of the art in crowdsourcing. Previous CrowdCamps led to key papers in the field, such as the Future of Crowd Work paper and my own CHI paper on Subcontracting Micro Work.

This year, when we put out the call for CrowdCamp, we witnessed an interesting dynamic. A large number of participants were students and novices to crowdsourcing, but they had great interest in learning about and then impacting the field. This dynamic reminded me of what I encountered when I organized my first hackathon, FixIT: the participants had great visions and energy for changing the world, but they also had limited skills to execute their ideas, and they lacked data to determine whether their ideas were actually worth pursuing. To address these challenges in the past, I gave hackathon participants bootcamps to ramp up their technical skills (this allowed them to execute some of their visions). We also taught these participants about human-centered design to empower them to create artifacts and solutions that match people’s needs, rather than a hammer in search of nails.

For CrowdCamp, we decided to do a similar thing:
We had a mini-bootcamp, organized by Toloka (a crowdsourcing platform), that explained how to design and create crowd-powered systems. The bootcamp started with a short introduction to what crowdsourcing is, common types of crowdsourcing projects (like image/text/audio/video classification), and interesting ones (like side-by-side comparison, data collection, and spatial crowdsourcing). After that, the bootcamp introduced the Toloka platform and some of its unique features. Then the bootcamp briefly presented Toloka’s Python SDK (Toloka-Kit and Crowd-Kit) and moved to an example project on creating a crowd-powered system, specifically a face detection one. The code used in the bootcamp is in the following Google Colab:
https://colab.research.google.com/drive/13xef9gG8T_HXd41scOo9en0wEZ8Kp1Sz?usp=sharing
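To give a flavor of one building block such crowd-powered systems rely on, here is a generic sketch (not the bootcamp’s Toloka code) of aggregating redundant labels from several workers into one answer per task using a simple majority vote; the task IDs, worker IDs, and labels are made up for illustration.

import pandas as pd

# Hypothetical raw judgments: each row is one worker's label for one task.
judgments = pd.DataFrame({
    "task":   ["img1", "img1", "img1",    "img2",    "img2",    "img2"],
    "worker": ["w1",   "w2",   "w3",      "w1",      "w2",      "w3"],
    "label":  ["face", "face", "no_face", "no_face", "no_face", "face"],
})

# For each task, keep the label chosen by the most workers (majority vote).
consensus = judgments.groupby("task")["label"].agg(lambda labels: labels.mode().iloc[0])
print(consensus)  # img1 -> face, img2 -> no_face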

We taught human-centered design and had a panel with real-world crowdworkers who shared their experiences and needs. The participants were empowered to design better for crowdworkers and to create more relevant technologies for them, as well as technologies that would better coordinate crowdworkers to produce higher quality work. The crowdworkers who participated in CrowdCamp all came from Africa, and they shared how crowd work had provided them with new job opportunities that were typically not available in their countries. Crowd work helped cover their expenses (as a side job). They were motivated to participate in crowd work for the additional money, and also because they knew they were contributing to something bigger than themselves (e.g., labeling images that will ultimately help power self-driving cars). Some of the challenges these crowdworkers experienced included unpaid training sessions; it was sometimes unclear whether the training sessions were worth it or not. They also discussed the importance of building worker communities.

CrowdCamp ended up being a success, with over 70 registered participants who created a number of different useful tools for crowdworkers. The event was hybrid, with people on the East Coast joining us at Northeastern University. We had delicious pizza and, given that we were in Boston, delicious Dunkin’ Donuts 🙂

Chairing Professor Seth Cooper’s Keynote.

We had the honor of chairing the keynote of Professor Seth Cooper, an Associate Professor at the Khoury College of Computer Sciences at Northeastern University. He previously worked for Pixar Animation Studios and Electronic Arts, a big game maker. Seth is also the recipient of an NSF CAREER grant.
Professor Cooper’s research has focused on using video games and crowdsourcing techniques to solve difficult scientific problems. He is the co-creator, lead designer, and developer of Foldit, a scientific discovery game that allows regular citizens to advance the field of biochemistry. Overall, his research combines scientific discovery games (particularly in computational structural biochemistry), serious games, and crowdsourcing games. A pioneer in the field of scientific discovery games, Dr. Cooper has shown that video game players are able to outperform purely computational methods for certain types of structural biochemistry problems, effectively codifying their strategies and integrating them in the lab to help design real synthetic molecules. He has also developed techniques to adapt the difficulty of tasks to individual game players and to generate game levels.

Seth’s talk discussed how he is using crowdsourcing to improve video games, and video games to improve crowdsourcing. What does this mean? In his research, Professor Cooper integrates crowd workers to help designers improve their video games. For example, he integrates crowds to help designers test just how hard or easy the game they are creating is. This lets designers identify how easy it is for gamers to advance through the different stages of a game. The integration of crowdworkers allows designers to easily iterate on and improve their video game. Dr. Cooper is also integrating gaming to improve crowdsourcing; in particular, he has studied how games can improve the quality of work produced by crowd workers.

During the Q&A with Professor Cooper, some interesting questions emerged:
What types of biases do crowdworkers bring to the table when co-designing video games? It was unclear whether crowdworkers actually play a video game the way typical gamers would. Hence, the audience wondered just how much designers actually use the results of the way crowdworkers engage with a video game. Professor Cooper mentioned that in his research he found that crowdworkers play games similarly to typical gamers. One difference is that typical gamers (who play voluntarily instead of getting paid to play) usually focus more on the aspects of the game they like the most, while crowdworkers explore the whole game instead of focusing on particular parts (because of the role that payment plays). Perhaps these crowdworkers feel that by exploring the whole game they are better showing the requester (designer) that they are indeed playing the game and not slacking off. Some people have a gaming style that focuses on a “catch-them-all” approach (an exploratory mode); the term references Pokémon, where players want to explore the entire game and collect all the different elements (e.g., Pokémon).

How might we integrate game design to help crowdworkers learn? Dr. Flores-Saviaga posed an interesting question about the role games could play in facilitating the career development of these workers. Professor Cooper expressed an interest in this area, mentioning that one could imagine workers, instead of earning badges within the game, earning real certificates that translate into new job opportunities.

What gave him confidence that the gaming approach in crowdsourcing is worth pursuing? When Foldit came out, it was unclear that gaming would actually be useful for mobilizing citizen crowds to complete complex scientific tasks. The audience wanted to know what led him to explore this path. Professor Cooper explained that part of it was taking a risk down a path he was passionate about: gaming. I think for PhD students and other new researchers starting out, it can be important to trust your intuition and conduct research that personally interests you. In research, you will take risks, which makes it all the more exciting 🙂


Dr. Jenn Wortman’s Keynote.

We greatly enjoyed the amazing keynote given by Dr. Jenn Wortman Vaughan (@jennwvaughan) at HCOMP 2022. She presented her research in Responsible AI, especially interpretability and fairness in AI systems.

A takeaway is that there are challenges in the design of interpretability tools for data scientists, such as InterpretML or the SHAP Python package: her team found that these tools can lead to over-trusting and misusing ML models. For more info, see her CHI 2020 paper: “Interpreting Interpretability.”

Dr. Jeffrey Bigham’s Keynote.

An incredible keynote was given by Dr. Jeffrey Bigham at HCOMP 2022. He presented work on image description spanning 17 years! He showed the different connections (loops) involved in finding the right problem and the right solution in image description, such as computer vision, real-time recruitment, gig workers, conversations with the crowd, datasets, etc.

A takeaway is that there can be different interactions, or loops, in the process of applying machine learning and HCI, as seen in the image below, from problem selection all the way to deployment of the system.

Doctoral Consortium.

The HCOMP doctoral consortium was led by Dr. Chien-Ju Ho and Dr. Alex Williams. The consortium is an opportunity for PhD students to share their research with crowdsourcing and human computation experts. Students have the opportunity to meet other PhD students, industry experts, and researchers to expand their network and receive mentoring from both industry and academia. Our lab participated in the proposal “Organizing Crowds to Detect Manipulative Content.” A lab member, Claudia Flores-Saviaga, presented the research she has done in this space for her PhD thesis.

Exciting news for the HCOMP community!

The big news I want to share is that I have the honor of being a co-organizer of next year’s HCOMP! I will co-organize it with Alessandro Bozzon and Michael Bernstein. We are going to host the conference in Europe, and it will be held jointly with the Collective Intelligence conference. Our theme is about reuniting and helping HCOMP grow by connecting with other fields, such as human-centered design, citizen science, data visualization, and serious games. I am excited to have the honor and opportunity to help build the HCOMP conference.



List of MIT Tech Review Inspiring Innovators

We are part of the amazing network of the 35 Innovators Under 35 by the MIT Technology Review. We were invited to their EmTech event and had an amazing dinner with other innovators and people having an impact in the field. We are very thankful to Bryan Bryson for the invitation, and we also want to congratulate him and his team for all the work done to build such a vibrant innovation ecosystem.

I share below a list of some of the innovators I met. Keep an eye on them and their research!

Setor Zilevu (Meta and Virginia Tech).

He is working at the intersection of human-computer interaction and machine learning to create semi-automated, in-home therapy for stroke patients. After his father suffered a stroke, Zilevu wanted to understand how to integrate those two fields in a way that would enable patients at home to get the same type of therapy, including high-quality feedback, that they might get in a hospital. The semi-automated human-computer interaction, which Zilevu calls the “tacit computable empower” method, can be applied to other domains both within and outside health care, he says.

Sarah B. Nelson (Kyndryl)

She is Chief Design Officer and Distinguished Designer for Kyndryl Vital, Kyndryl’s designer-led co-creation experience. From the emergence of the web through the maturing of user experience practice, Sarah has been known throughout the design industry as a thought leader in design-led organizational transformation, participatory design, and forward-looking design capability development. At Kyndryl, she leads the design profession, partnering with technical strategists to integrate experience-ecosystem thinking into technical solutions. Sarah is an encaustic painter and a passionate surfer.

Moses Namara (Meta and Clemson University).

Namara co-created the Black in Artificial Intelligence graduate application mentoring program to help students applying to graduate school. The program, run through the resource group Black in AI, has mentored 400 applicants, 200 of whom have been accepted to competitive AI programs. It provides an array of resources: mentorship from current PhD students and professors, CV evaluations, and advice on where to apply. Namara now sees the mentorship system evolving to the next logical step: helping Black PhD and master’s students find that first job.

Joanne Jang (OpenAI)

Joanne Jang is the product lead of DALL·E, an AI system by OpenAI that creates original images and artwork from a natural language description. Joanne and her team were responsible for turning the DALL·E research into a tool people can use to extend their creative processes and for building safeguards to ensure the technology will be used responsibly. The DALL·E beta was introduced in July 2022 and now has more than 1 million users.

Daniel Salinas (Colombia: Super Plants)

His start-up uses nanotechnology to monitor plants by connecting them to computers, and it facilitates decarbonization. Humans have “plant blindness”: our biases keep us from perceiving plants the way we perceive animals. This plant-human disconnect means that tree-planting projects aimed at capturing carbon in the face of the climate crisis are not sustainable if the reforestation is not maintained over time. Colombian entrepreneurship student Daniel Salinas discovered the lack of infrastructure in the fight for decarbonization through a tree-planting start-up. As he recalls: “Every time we went to the field we had problems.” To bridge this disconnect between people and trees, Salinas has created a plant-computer interface that makes it possible to monitor vegetation through his start-up Superplants. With this contribution, Salinas became one of MIT Technology Review en español’s Innovators Under 35 Latin America 2022.



Relevant References:
- https://www.building-up.org/knowledgehub/innovadores-menores-de-35-latinoamrica-2022
- https://event.technologyreview.com/emtech-mit-2022/speakers
- https://www.technologyreview.com/innovator/setor-zilevu/


Fighting online trolls with bots

An animated troll typing on a laptop

Reposted from The Conversation: https://theconversation.com/fighting-online-trolls-with-bots-70941

The wonder of internet connectivity can turn into a horror show if the people who use online platforms decide that instead of connecting and communicating, they want to mock, insult, abuse, harass and even threaten each other. In online communities since at least the early 1990s, this has been called “trolling.” More recently it has been called cyberbullying. It happens on many different websites and social media systems. Users have been fighting back for a while, and now the owners and managers of those online services are joining in.

The most recent addition to this effort comes from Twitch, one of a few increasingly popular platforms that allow gamers to play video games, stream their gameplay live online and type back and forth with people who want to watch them play. Players do this to show off their prowess (and in some cases make money). Game fans do this for entertainment or to learn new tips and tricks that can improve their own play.

a screenshot of a video game that shows advice from spectators

When spectators get involved, they can help a player out. Saiph Savage, CC BY-ND

Large, diverse groups of people engaging with each other online can yield interesting cooperation. For example, in one video game I helped build, people watching a stream could make comments that would actually give the player help, like slowing down or attacking enemies. But of the thousands of people tuning in daily to watch gamer Sebastian “Forsen” Fors play, for instance, at least some try to overwhelm or hijack the chat away from the subject of the game itself. This can be a mere nuisance, but can also become a serious problem, with racism, sexism and other prejudices coming to the fore in toxic and abusive comment threads.

In an effort to help its users fight trolling, Twitch has developed bots – software programs that can run automatically on its platform – to monitor discussions in its chats. At present, Twitch’s bots alert the game’s host, called the streamer, that someone has posted an offensive word. The streamer can then decide what action to take, such as blocking the user from the channel.

Trolls can share pornographic images in a chat channel, instead of having conversations about the game. Chelly Con Carne/YouTube, CC BY-ND

an example of images shared by trolls

Beyond just helping individual streamers manage their audiences’ behavior, this approach may be able to capitalize on the fact that online bots can help change people’s behavior, as my own research has documented. For instance, a bot could approach people using racist language, question them about being racist and suggest other forms of interaction to change how people interact with others.
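To make the mechanics concrete, here is a minimal sketch of that alert-and-defer pattern. It is illustrative only and does not use Twitch’s actual bot or API; the word list, username, and handler are placeholders. The key point is that the bot only flags and suggests, while the human streamer decides what action to take.

# Placeholder list of flagged words (streamers would configure their own).
OFFENSIVE_WORDS = {"offensiveword1", "offensiveword2"}

def check_message(username, message):
    """Return an alert for the streamer if the message contains a flagged word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    flagged = words & OFFENSIVE_WORDS
    if flagged:
        return (f"ALERT: {username} used flagged word(s) {sorted(flagged)}; "
                f"consider a timeout or block.")
    return None  # nothing flagged; the bot stays silent

# Example: the bot alerts, and the streamer chooses whether to block the user.
alert = check_message("troll42", "hello offensiveword1 everyone")
if alert:
    print(alert)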

Using bots to affect humans

In 2015 I was part of a team that created a system that uses Twitter bots to do the activist work of recruiting humans to do social good for their community. We called it Botivist.

We used Botivist in an experiment to find out whether bots could recruit people and get them to contribute ideas about tackling corruption instead of just complaining about it. We set up the system to watch Twitter for people complaining about corruption in Latin America, identifying the keywords “corrupcion” and “impunidad,” the Spanish words for “corruption” and “impunity.”

When it noticed relevant tweets, Botivist would tweet in reply, asking questions like “How do we fight corruption in our cities?” and “What should we change personally to fight corruption?” Then it waited to see if the people replied, and what they said. Of those who engaged, Botivist asked follow-up questions and asked them to volunteer to help fight the problem they were complaining about.
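Here is a simplified sketch of that loop. It is illustrative only, with no real Twitter API calls: a made-up list of tweets stands in for the live stream, and the bot simply checks for the tracked keywords and cycles through the call-to-action questions it wants to test.

import itertools

KEYWORDS = ("corrupcion", "impunidad")
QUESTIONS = itertools.cycle([
    "How do we fight corruption in our cities?",
    "What should we change personally to fight corruption?",
])

def maybe_reply(tweet_text):
    """Return a call-to-action question if the tweet mentions a tracked keyword."""
    text = tweet_text.lower()
    if any(keyword in text for keyword in KEYWORDS):
        return next(QUESTIONS)
    return None

# Simulated incoming tweets instead of a live Twitter stream.
for tweet in ["La corrupcion en mi ciudad es terrible",
              "Nice weather today",
              "No aguanto la impunidad"]:
    reply = maybe_reply(tweet)
    if reply:
        print("Replying:", reply)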

We found that Botivist was able to encourage people to go beyond simply complaining about corruption, pushing them to offer ideas and engage with others sharing their concerns. Bots could change people’s behavior! However, we also found that some individuals began debating whether, and how, bots should be involved in activism. Nevertheless, the results suggest that people who were comfortable engaging with bots online could be mobilized to work toward a solution, rather than just complaining about the problem.

Humans’ reactions to bots’ interventions matter, and inform how we design bots and what we tell them to do. In research at New York University in 2016, doctoral student Kevin Munger used Twitter bots to engage with people expressing racist views online. Calling out Twitter users for racist behavior ended up reducing those users’ racist communications over time – if the bot doing the chastising appeared to be a white man with a large number of followers, two factors that conferred social status and power. If the bot had relatively few followers or was a black man, its interventions were not measurably successful.

Raising additional questions

Bots’ abilities to affect how people act toward each other online brings up important issues our society needs to address. A key question is: What types of behaviors should bots encourage or discourage?

It’s relatively benign for bots to notify humans about specifically hateful or dangerous words – and let the humans decide what to do about it. Twitch lets streamers decide for themselves whether they want to use the bots, as well as what (if anything) to do if a bot alerts them to a problem. Streamers’ decisions not to use the bots involve both technological factors and concerns about chat participation. In conversations I have seen among Twitch streamers, some have described disabling the bots because they interfere with browser add-ons the streamers already use to manage their audience chat space. Other streamers have disabled the bots because they feel the bots hinder audience participation.

But it could be alarming if we ask bots to influence people’s free expression of genuine feelings or thoughts. Should bots monitor language use on all online platforms? What should these “bot police” look out for? How should the bots – which is to say, how should the people who design the bots – handle those Twitch streamers who appear to enjoy engaging with trolls?

One Twitch streamer posted a positive view of trolls on Reddit:

“…lmfao! Trolls make it interesting […] I sometimes troll back if I’m in a really good mood […] I get similar comments all of the time…sometimes I laugh hysterically and lose focus because I’m tickled…”

Other streamers even enjoy sharing their witty replies to trolls:

“…My favorite was someone telling me in Rocket League “I hope every one of your followers unfollows you after that match.” My response was “My mom would never do that!” Lol…”

What about streamers who actually want to make racist or sexist comments to their audiences? What if their audiences respond positively to those remarks? Should a bot monitor a player’s behavior on his own channel against standards set by someone else, such as the platform’s administrators? And what language should the bots watch for – racism, perhaps, but what about ideas that are merely unpopular, rather than socially damaging?

At present, we don’t have ways of thinking about, talking about or deciding on these balancing acts of freedom of expression and association online. In the offline world, people are free to say racist things to willing audiences, but suffer social consequences if they do so around people who object. As bots become more able to participate in, and exert influence on, our human interactions, we’ll need to decide who sets the standards and how, as well as who enforces them, in online communities.

 

How can artificial intelligence be applied to streamline government procedures?

a graphic showing a brain on top of a desk surrounded by computers

Reposted from the blog of the Inter-American Development Bank: https://blogs.iadb.org/conocimiento-abierto/es/inteligencia-artificial-y-gobierno/

The great challenge of implementing artificial intelligence (AI) technologies in government contexts is finding ways to make them increasingly user friendly and trustworthy, as well as building a perspective of digital social well-being into them.

In this article we share some guidelines to keep in mind when using artificial intelligence to streamline government procedures. As an example, we will use our most recent experience creating a new platform for the Mexican passport application process. Our mission, as the Civic Innovation Lab at the Universidad Autónoma de México (UNAM) and as the New Technologies Team of Mexico’s Ministry of Foreign Affairs (Secretaría de Relaciones Exteriores, SRE), is to create innovations that make it easier for citizens to approach their government and become familiar with its processes.

1. Conceptualize the problem and look for appropriate solutions

We understood that the most important thing in this project was to understand and respond clearly and efficiently to the queries that users of this service would make, so we decided to work on developing intelligent virtual assistants (computer programs that use AI), since these offer two concrete qualities:

  • They give users the opportunity to interact from their mobile device without needing to turn to a computer.
  • They allow procedures to be completed via text messages, without having to make a phone call.

2. Understand the social context of the user population

Identifying the technology that best responds to the problem is important, but we also have to study how the AI will interact with its users. While it is true that most virtual assistants are designed to give generic answers, educating these assistants about the social context, and therefore about citizens’ expectations when using these technologies, can translate into higher usage rates.

In this case, we used Geert Hofstede’s 6-D model as a reference for designing our virtual assistants. In the following bullets we highlight some of the characteristics that, according to this model, stand out in Mexican culture, and we explain how we addressed these challenges when building our virtual assistants:

  • Mexican culture values interpersonal relationships: we sought to strengthen the connection with citizens by using friendly language, including stickers and emojis, with the idea of fostering closeness and warmth in every interaction with the virtual assistant.
  • Mexican culture finds it frustrating when a process produces an uncertain result: we aimed for a clear and concise conversation flow. For this we used Botpress to create rule-based, “if-then” systems, e.g., “if the citizen intends to renew their passport and asks for help with this procedure, then show them the instructions so they can complete it” (see the sketch after this list). This type of artificial intelligence has predefined outputs; in this case, the outputs are types of government procedures shown according to rules defined by experts, who in turn map the different intentions of citizens, all with the idea of providing certainty.
  • Mexican culture is polychronic (several tasks are carried out at the same time): we made our virtual assistants guide users closely and remind them what to do to finish the procedure at hand. We also highlight the fact that our virtual assistants can be accessed from any phone with internet access, which makes it easier to complete procedures alongside other everyday activities.
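As promised above, here is a minimal plain-Python sketch of that “if-then” rule pattern. It is illustrative only: the production assistants were built in Botpress, and the intents, keywords, and responses below are placeholders. It also shows the human-handoff fallback described in section 4 below.

# Placeholder rules: each recognized intent maps to a predefined response.
RULES = {
    "renew_passport": "Here are the steps to renew your passport: ...",
    "first_passport": "To apply for your first passport, you will need: ...",
    "office_hours":   "The consulate's opening hours are: ...",
}

def detect_intent(message):
    """Very naive keyword matching, standing in for Botpress's intent detection."""
    text = message.lower()
    if "renew" in text or "renovar" in text:
        return "renew_passport"
    if "first passport" in text or "new passport" in text:
        return "first_passport"
    if "hours" in text or "open" in text:
        return "office_hours"
    return "unknown"

def respond(message):
    intent = detect_intent(message)
    # Predefined outputs give citizens certainty; anything unrecognized is
    # escalated to a government official who can apply human judgment.
    return RULES.get(intent, "Let me connect you with an official who can help.")

print(respond("I want to renew my passport, can you help?"))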

3. Work on a representative design

Once the conceptualization and contextualization work has been carried out, we have to think about how to represent our project in a way that encourages users to adopt the tool. This is the final part of our process before implementing this technology, and to design the image of the two virtual assistants we followed two lines of work:

3.1 Sessions with designers and citizens

We conducted interviews and design sessions with citizens to imagine, together with the team, what the virtual assistant should look like. After the interviews we carried out a qualitative analysis of the ideas that were shared and developed the following graphics:

icons that show choices labeled letter A through F

With these options, our next step was to run a survey to determine the most popular image among a representative sample of the population.

a bar graph showing results for virtual assistant images

The passport image turned out to be the favorite, especially because its connection with government procedures is immediate and it maintains an affinity with the country.

3.2 Sessions with fiction writers

For the design of our second assistant, we turned to renowned fiction writers to advise us on what the virtual assistant’s image should look like. They agreed that it should be a tlacomiztli, since it is a traditional Mexican animal that connects with native peoples. The next step was to name our new virtual assistant, who is called Mixtli, and who has an intellectual look and dresses formally to show that he works in a government position.

At the end of these two exercises, our assistants have a face!

a graphic showing a raccoon sitting at a desk and an animated character holding a laptop

4. Create value for citizens as well as for government officials

Our virtual assistants were conceived not only to guide citizens through their passport procedures, but also to support the government officials involved in these processes and, as much as possible, lighten their workload.

To avoid mechanical repetition and help workers concentrate on tasks that require their specialized knowledge, we designed computational mechanisms that delegate repetitive tasks to the virtual assistant, such as answering questions about the consulate’s opening and closing hours. These same mechanisms allow officials to join the conversation when a citizen runs into situations whose solution requires human judgment.

Some preliminary conclusions

Our virtual assistants are already being deployed under a form of A/B testing that lets us understand how citizens interact with them. Initial monitoring of their performance suggests that the two assistants complement each other: Mixtli, the virtual assistant created with the fiction writers, seems to spark curiosity among citizens about the work the SRE does, which indicates it may be promising to position him as the assistant that answers questions about general procedures. In turn, interactions with the passport avatar suggest that this assistant works best when specifically guiding people through the passport application process. We stress that these are early conclusions and that we continue to monitor the impact of this project.

We understand that there is still a way to go to modernize and keep streamlining government procedures in Mexico through the implementation of AI technologies. Even so, we believe that every effort and every project brings us closer to that goal, and our teams are committed to continuing to develop the tools needed so that citizens and government officials can carry out their procedures and processes in the best possible way.

If you found our description of how we implemented AI to streamline the Mexican passport process interesting, we invite you to read this brief tutorial on Botpress, the platform our teams used to develop the virtual assistants. On that page you can also download the software in its open source format and start experimenting with it; you might even take advantage of this technology to build useful solutions for your community.

 

Savvy social media strategies boost anti-establishment political wins

a politician speaks at a podium

September 12, 2018 11.52am BST Mexican President-elect Andrés Manuel López Obrador. AP Photo/Marco Ugarte

Reposted from The Conversation: https://theconversation.com/savvy-social-media-strategies-boost-anti-establishment-political-wins-98670

By Saiph Savage, Claudia Flores-Saviaga

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Mexico’s anti-establishment presidential candidate, Andrés Manuel López Obrador, faced opposition from the mainstream media. And he spent 13 percent less on advertising than his opponents. Yet the man commonly known by his initials as “AMLO” went on to win the Mexican presidency in a landslide with over 53 percent of the vote in a four-way race in July.

That remarkable victory was at least partly due to the social media strategies of the political activists who backed him. Similar strategies appeared in the 2016 U.S. presidential election and the 2017 French presidential race.

Our lab has been analyzing these social media activities to understand how they’ve worked to threaten – and topple – establishment candidates. By analyzing more than 6 million posts from Reddit, Facebook and Twitter, we identified three main online strategies: using activist slang, attempting to “go viral” and providing historical context.

Redditors’ responses to information strategies

In our study of activity in a key Trump-supporting subreddit, citizens tended to engage most with posts that explained the political ecosystem to them, commenting more on them and giving more upvotes of support.

a chart measuring engagement of reddit posts

Some of these strategies might simply be online adaptations of long-standing strategies used in traditional offline campaigning. But others seem to be new ways of connecting and driving people to the polls. Our lab was interested in understanding the dynamics behind these online activists in greater detail, especially as some had crossed over from being merely supporters – even anonymous ones – not formally affiliated with campaigns, to being officially incorporated in campaign teams.

Integrating activist slang

Some political activists pointedly used slang in their online conversations, creating a dynamic that elevated their candidate as an opponent of the status quo. Trump backers, for instance, called themselves “Deplorables,” supporting “the God Emperor” Trump against “Killary” Clinton.

AMLO backers called themselves “AMLOVERS” or “Chairos,” and had nicknames for his opponents, such as calling the other presidential candidate, Ricardo Anaya, “Ricky Riquin Canayin” – Spanish for “The Despicable Richy Rich.”

Efforts to ‘go viral’

Some political activists worked hard to identify the material that was most likely to attract wide attention online and get media coverage. Trump backers, for instance, organized on the Discord chat service and Reddit forums to see which variations of edited images of Hillary Clinton were most likely to get shared and go viral. They became so good at getting attention for their posts that Reddit actually changed its algorithm to stop Trump backers from filling up the site’s front page with pro-Trump propaganda.

Similarly, AMLO backers were able to keep pro-AMLO hashtags trending on Twitter, such as #AMLOmania, in which people across Mexico made promises of what they would do for the country if AMLO won. The vows ranged from free beer and food in restaurants to free legal advice.

For instance, an artist promised to paint an entire rural school in Veracruz, Mexico, if AMLO won. A law firm promised to waive its fees for 100 divorces and alimony lawsuits if AMLO won. The goal of citizen activists was to motivate others to support AMLO, while doing positive things for their country.

The historian-style activists

examples of explanatory materials shared on social media

Historian-style activists created explanatory materials to share on social media: a) backing AMLO with a visual description of his economic plan; b) Helping Trump backers ‘red-pill liberals,’ waking them up to a conservative reality. Saiph Savage and Claudia Flores-Saviaga, CC BY-ND

Some anti-establishment activists were able to recruit more supporters by providing detailed explanations of the political system as they saw it. Trump backers, for instance, created electronic manuals advising supporters how to explain their viewpoint to opponents in order to get them to switch sides. They compiled the top WikiLeaks revelations about Hillary Clinton, assembled explanations of what they meant and asked people to share them.

Pro-AMLO activists did even more, creating a manual that explained Mexico’s current economics and how the proposals of their candidate would, in their view, transform and improve Mexico’s economy.

Our analysis identified that one of the most effective strategies was taking time to explain the sociopolitical context. Citizens responded well to, and engaged with, specific reasoning about why they should back specific candidates.

As the U.S. midterm elections approach, it’s worth paying attention to whether – and in what races – these methods reappear, and even how people might use them to engage in fruitful political activism that brings the changes they want to see. You can read more about our research in our new ICWSM paper.

 

Activist Bots: Helpful But Missing Human Love?

Saiph Savage Nov 29, 2015

an image showing police trying to contain protestors

The activist group that helped us design Botivist, a system for coordinating activist bots that call people to action and recruit them for activism.

Political bots are everywhere, swamping online conversations to promote a political candidate, sometimes even engaging in swiftboating. But instead of continuing to build more political bots, what about creating bots for people – for example, activists? What do bots for social change look like?

It might help to first think about when activists might actually need bots.

Activists can face extreme dangers, including being murdered. Given that bots can remove responsibility from humans, we could imagine designing bots that execute, and take responsibility for, tasks that are too dangerous for human activists to do. At the end of the day, what happens if you kill a bot?

Our interviews with activists also highlighted that they have to spend excessive time on recruitment, i.e., trying to convince people to join their cause. While obtaining new members is crucial to the long-term survival of any activist group, activists can sink enormous amounts of time into convincing people who, in the end, may never participate. It can also be hard for humans to rapidly test which recruitment campaigns work best: is it better to run a solidarity campaign that reminds individuals of the importance of helping each other, or is it more effective to be upfront and directly ask for participation? The automation that bots provide means we could use them to probe different recruitment campaigns at scale, without humans spending too much time on these tedious tasks.

These ideas about how task automation could help activists led us to design Botivist, a system that uses online bots to recruit humans for activism. The bots also allow activists to easily probe different recruitment strategies.

Overview of Botivist, a system that automates the recruitment of people for activism and allows activists to try different recruitment strategies.
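As a minimal sketch of the strategy-probing idea (not Botivist’s actual implementation; the message templates and the post_call_to_action helper below are hypothetical), a bot could randomly assign each potential volunteer to one recruitment strategy and record the assignment:

import random

# Hypothetical templates for different recruitment strategies.
STRATEGIES = {
    "direct": "Help fight corruption: reply with one concrete idea.",
    "solidarity": "Our communities are stronger when we help each other. Share one idea to fight corruption.",
    "gain": "Imagine what we could gain without corruption. What is one idea to get there?",
}

def post_call_to_action(handle, text):
    # Placeholder for a real posting API call (e.g., a Twitter client).
    print(f"@{handle} {text}")

def probe(handles):
    """Randomly assign each potential volunteer to one recruitment strategy."""
    assignments = {}
    for handle in handles:
        strategy = random.choice(list(STRATEGIES))
        assignments[handle] = strategy
        post_call_to_action(handle, STRATEGIES[strategy])
    return assignments

assignments = probe(["volunteer_1", "volunteer_2", "volunteer_3"])

Tallying the responses each strategy triggers then shows which call to action works best.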

We conducted a study of Botivist to understand the feasibility of using bots to convince people to participate in activism. In particular, we studied whether bots could recruit people and get them to contribute ideas about tackling corruption. We found that over 80 percent of the calls to action made by Botivist’s automated activists received at least one response. However, we also found that the strategy the bots used did matter. We were surprised to discover that strategies that work well face-to-face were less effective when used by bots. Messages that are effective when delivered by humans sometimes resulted in circular discussions in which people questioned whether bots should be involved in activism at all. Persuasive strategies generally drew fewer responses.

Number of volunteers and responses that each strategy triggered. When the bots were upfront and direct, they recruited the most participants and prompted the most responses.

The individuals who decided to collaborate with Botivist were people already involved in online activism and marketing: they mentioned hashtags and Twitter accounts related to social causes and marketing analytics. It is likely that people linked Botivist to online marketing schemes. Those who responded to Botivist, then, were the ones who already engage with such marketing agents in their communities; interacting with a bot was perhaps more natural for them.

To design bots for activists, it is necessary to first understand the communities in which the bots are being deployed. If we want to design bots that can take on some of the more dangerous activities of human activists, we have to first understand how people react when an automated agent conducts the task. Will it be as effective as when done by a human? Many activists who endanger their lives making timely reports about terrorists or organized crime are usually very empathic, caring, and show great solidarity with their public. Will it matter when these tasks are done by an automated agent that, by nature, cannot care?

To read more about Botivist, check out our CSCW 2016 research paper: “Botivist: Calling Volunteers to Action Using Online Bots,” by Saiph Savage, Andres Monroy-Hernandez, and Tobias Hollerer.

Points/talking bots: “Activist Bots: Helpful But Missing Human Love?” is a contribution to a weeklong workshop at Data & Society, led by “Provocateur-in-Residence” Sam Woolley, that brought together a group of experts to get a better grip on the questions that bots raise.

 

‘Making Europe Great Again,’ Trump’s online supporters shift attention to the French election

a graphic showing the Eiffel Tower, French flag and frogs

The online movement that played a key role in getting Donald Trump elected president of the United States has begun to spread its political influence globally, starting with crossing the Atlantic Ocean. Among several key elections happening in 2017 around Europe, few are as hotly contested as the race to become the next president of France. Having helped install their man in the White House in D.C., a group of online activists is now trying to get their far-right woman, Marine Le Pen, into the Élysée Palace in Paris.

A French adaptation of a common Trump-backers’ meme: Pepe the Frog as Marine Le Pen. LitteralyPepe/reddit

In 2016, a group of online activists some might call trolls — people who engage online with the specific intent of causing trouble for others — joined forces on internet comment boards like 4chan and Reddit to promote Donald Trump’s candidacy for the White House. These online rebels embraced Trump’s conscious efforts to disrupt mainstream media coverage, normal politics and public discourse. His anti-establishment message resonated with the internet’s underground communities and inspired their members to act.

The effects of their collective work, for the media, the public and indeed the country, are still unfolding. But many of the same individuals who played important roles in the online effort for Trump are turning their attention to politics elsewhere. Their goal, one participant told Buzzfeed, is “to get far right, pro-Russian politicians elected worldwide,” perhaps with a secondary goal of heightening Western conflict with Muslim countries.

Our research has focused on studying political actors and citizen participation on social media. We used our experience to analyze 16 million comments on five separate Reddit boards (called “subreddits”). Our analysis suggests that some of the same people who played significant roles in a key pro-Trump subreddit are sharing their experience with their French counterparts, who support the nationalist anti-immigrant candidate Le Pen.

Finding Trump backers active in European efforts 

The so-called “alt-right” movement, an offshoot of conservatism mixing racism, white nationalism and populism, is fed in part by online trolls, who use 4chan message boards and the Discord messaging app to create thousands of memes — images combining photographs and text commentary — related to political causes they want to promote.

As Buzzfeed reported, they test political images on Reddit to see which get the most attention and the biggest reactions before sending them out into the wider world on Facebook, Twitter and other social media platforms. However, it wasn’t clear how much of this was actually happening.

We set out to quantify exactly what was happening, how often, and how many people were involved. We started with the subreddit “The_Donald,” one of the largest pro-Trump hubs, and analyzed the activity of every Reddit username that had ever commented in that subreddit from its start in 2015 until February 2017. We looked specifically for those same usernames’ appearances in European-related “sister subreddits” — as recognized by “The_Donald” users themselves.

a bar graph showing activity of The_Donald subreddit participants

We found that of the more than 380,000 Reddit users active in “The_Donald,” over 3,000 had indeed participated in one or more of the “sister subreddits” supporting right-wing candidates in European elections: “Le_Pen,” “The_Europe,” “The_Hofer” and “The_Wilders.” The first two had the most involvement from people also active in “The_Donald.” This is admittedly a small percentage of participants, but it shows that there is overlap, and that the knowledge and techniques used to support Trump are making their way to Europe.
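For readers curious about the mechanics, here is a minimal sketch of this kind of overlap analysis; the JSONL comment dumps, file names and the “author” field are assumptions about the data format, and this is not the study’s actual code:

import json

def commenters(path):
    """Collect the set of usernames that commented in a subreddit dump."""
    authors = set()
    with open(path) as f:
        for line in f:
            comment = json.loads(line)
            authors.add(comment["author"])
    return authors

# Hypothetical per-subreddit comment dumps, one JSON object per line.
the_donald = commenters("the_donald_comments.jsonl")
le_pen = commenters("le_pen_comments.jsonl")

overlap = the_donald & le_pen
print(f"{len(overlap)} users commented in both subreddits")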

What are they up to?

Next we looked at how involved these Trump-supporting users were in the European right-wing discussions, based on how many comments a user made in any of the subreddits. Most users were moderately active, as might be expected of casual users exploring issues of personal interest. But we identified several accounts with behavior that suggested they might be actively organizing ultra-right collective action in the U.S. and Europe.

Activity of users involved in The_Donald and at least one European right-wing subreddit
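A minimal sketch of how per-user activity could be tallied to surface such outliers follows; the CSV and its columns are hypothetical, not the study’s exact schema:

import pandas as pd

# Hypothetical file: one row per comment, with "author" and "subreddit" columns.
comments = pd.read_csv("overlap_comments.csv")

# Comment counts per user in each community.
counts = (
    comments.groupby(["author", "subreddit"])
    .size()
    .unstack(fill_value=0)
)

# Flag accounts far above typical activity levels in any subreddit.
threshold = counts.stack().quantile(0.99)
outliers = counts[(counts > threshold).any(axis=1)]
print(outliers)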

There were three types of these users: people who were actively involved in European efforts, bots making automated posts, and people concerned about global influences.

The activists

Two outlier accounts in particular were what we called “Ultra-Right Activists.” They commented heavily on “The_Donald” and the four European subreddits: one of the outliers had more than 2,500 comments in “The_Donald” and over 1,000 comments in “Le_Pen.” The other had over 1,000 comments in “The_Donald” and over 1,000 across the European subreddits.

These accounts actively called people to action in both the U.S. and European subreddits. For instance, one post in “Le_Pen” recruited people to make memes: “Participate in the Discord chat to help us make memes.” Another post sought to organize Americans and Europeans to work together to create propaganda that would be effective in France: “We still have to explain to the Anglos some things about French politics and candidates so that they can understand. We must translate/transpose into the French context the memes that worked well in the U.S.”

We also found plans of flooding Facebook and Twitter with ultra-right content: “Yep, the media call them ‘la fachosphère’ (because we’re obviously literal fascists, right), and it dominates Twitter. That’s a great potential we have there. Soon I’m making an official account to retweet all the subs’ best content to them and make it spread.”

Not all of their efforts were necessarily successful. For example, an effort to transfer Trump’s main campaign slogan to Europe never really got going.

One comment we found on “The_Donald” appeared to lay out a game plan: “PHASE 1: MAGA (Make America Great Again) PHASE 2: MEGA (Make Europe Great Again).” Another sounded a similar theme: “Once we get the ball rolling here we will Make Europe Great Again. Steve Bannon has already been deployed to help Marine Le Pen, we haven’t forgotten about Europe.”

But we found only 210 comments mentioning “Make Europe Great Again” across the four European subreddits. While people on “The_Donald” seemed excited about spreading the phrase, Europeans didn’t go for it. Perhaps an English-language slogan simply didn’t resonate with European audiences.

The bots

This group consisted of accounts that were moderately active in both “The_Donald” and the European subreddits. While many of them were undoubtedly real people, some accounts in this group behaved like bots, posting the same comment repeatedly or even including the word “bot” in their account names.
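A minimal sketch of those two heuristics (repeated identical comments, or “bot” in the account name); the input mapping, account names and threshold below are hypothetical:

from collections import Counter

def looks_like_bot(author, comments, repeat_threshold=5):
    """Flag accounts with 'bot' in the name or that repeat the same comment."""
    if "bot" in author.lower():
        return True
    if not comments:
        return False
    most_repeated = Counter(comments).most_common(1)[0][1]
    return most_repeated >= repeat_threshold

# Hypothetical input: account name -> list of that account's comment texts.
comments_by_author = {
    "MemeBot2017": ["Vote Le Pen!"] * 12,
    "casual_user": ["Interesting thread.", "What is the source?"],
}
flagged = [name for name, texts in comments_by_author.items() if looks_like_bot(name, texts)]
print(flagged)  # ['MemeBot2017']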

Just as we don’t know the real identities or locations of the humans who posted, it’s not clear who might have been running the bots, or why. But these bot-type messages were posted in both The_Donald and the European subreddits. They seemed to be used as a way to create silly or fun collaborations between Americans and Europeans, and to spread an ultra-right-wing view of certain world events.

Some of the words most commonly used by people in this group were “news,” “fake” and “CNN.” People seemed to use those words to criticize traditional news media coverage of the ultra-right. However, some people also commented about possibly manipulating the big news channels to get coverage for Le Pen similar to Trump’s strategies:

“So we must get Le Pens (sic) name in the news every damn day. Just the (sic) like the MSM [mainstream media] couldn’t ignore Donald here, they will have to give her air time which will help her reach the disenfranchised.”

The anti-globalists

A third group of accounts were highly active on “The_Donald” but far less so on the European subreddits we examined. When they did join the European discussions, it was usually to discuss how European and U.S. liberals were ruining everyone and everything around the globe. People in this cluster appeared to participate in the European subreddits primarily to emphasize the negative actions that, in their view, liberals had orchestrated.

With the French election still weeks away, any effects these people might be having remain unclear. But it’s worth watching, and seeing where these activists turn their attention next.

 

“Countering Fake News in Natural Disasters via Bots and Citizen Crowds”

By Tonmona Roy

a graphic titled 'Countering Fake News During Natural Disasters'

On September 19, 2017, Mexico City was hit by a 7.1 magnitude earthquake that killed hundreds of people. The death toll rose quickly, with people trapped under the debris of fallen buildings. When there is a catastrophe of this magnitude, it is hard for the government to quickly assist everyone. Many people turned to social media to spread news about trapped people and needed supplies. Among the social media platforms, Twitter became the main site for exchanging information and mobilizing citizens for action. People used hashtags to learn what was happening in their neighborhoods and what direct actions they could take to help. Some of the most popular hashtags were #AquiNecesitamos (#HereWeNeed) and #Verificado19S (#Verified19S; 19S refers to September 19, the day of the earthquake). With these hashtags, people started to post what they needed and where to deliver it.

However, misinformation also started spreading. Some citizens, for example, started tweeting and calling for help for a doctor allegedly trapped in a building.

But Dr. Elena Orozco, along with her friends and family, soon started reporting on social media:

“…Elena Orozco is not trapped in any building. She is right here with us. She was trying to rescue her co-workers, who were the ones trapped in the building. We are actually still missing Erik Gaona Garnica who decided to go back into the building to get his computer…”

Systems for Countering Fake News Stories

Given that fake news was critically affecting the rescue efforts and people’s well-being, we decided to do something about it. We quickly realized that Codeando Mexico (a social-good startup) and universities across Mexico, such as UNAM, were organizing crowds of citizens to build civic media to help with the earthquake response. Our research lab (the HCI Lab at West Virginia University) decided to join forces with them, and over a weekend we rapidly built a large-scale system to counter fake news and deliver verified news about the earthquake.

This led us to bootstrap on existing social networks of people to solve the cold-start problem. Through our investigations, we identified that citizens had put together a Google Spreadsheet where they were posting news reports about the earthquake that were 100% verified (they had a group of people on the ground who actively verified each report). The group would then manually post the verified news from the spreadsheet to their social media accounts. But as the group became more popular, it became hard for the volunteers to keep up with the work and coordinate.

Bootstrapping Bots on Networks of Volunteers

Our second design focuses on automating some of the critical bottlenecks that these networks of volunteers experienced when verifying news. In our interviews, we identified that it was difficult for volunteers to differentiate fake news from real news, because doing so involved gathering all of the facts behind a story, and that it was also a pain to share the news on social media. Our second platform therefore introduced the idea of combining citizen crowds with bots (such as our bot @FakeSismo). The bots help verify news by gathering facts and then massively sharing the verified news stories on social media, along with an automatically generated image macro that gives the story more visibility. In this way, human volunteers can focus more on verifying the information. The workflow of our system is as follows:

a graphic that shows the system's workflow: 1. Volunteers report news. 2. The network of volunteers verifies the news reports. 3. Bots take the verified reports and distribute them on social media and to influencers, who can decide what to post.
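As a minimal sketch of the distribution step, assuming the volunteers have already verified the reports: the example reports and the post_to_twitter helper below are placeholders, not @FakeSismo’s actual code.

# Hypothetical verified reports pulled from the volunteers' spreadsheet.
verified_reports = [
    {"need": "water and blankets", "location": "collection center A"},
    {"need": "volunteer doctors", "location": "shelter B"},
]

def format_report(report):
    # Simple text version; the real system also attached an auto-generated image macro.
    return f"VERIFICADO: {report['need']} needed at {report['location']}. #Verificado19S"

def post_to_twitter(text):
    # Placeholder for a real Twitter client call.
    print(text)

for report in verified_reports:
    post_to_twitter(format_report(report))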

Bots in Action

To test our bot, we started by having it tweet verified information about the resources that were needed, and it got very good responses. The bot currently has 176 followers, and that number is growing. As an example, the bot posted a news report about certain needed resources, and someone started engaging with the bot, saying they had a refrigerator to give away. The bot focused on distributing that information and connecting them with citizens who could use the refrigerator.

We also saw that citizens tried to actively verify news reports along with the bot.

In short, our bot is working together with a group of enthusiastic volunteers to gather and distribute verified information. As we continue testing the bot, we hope to connect with a larger mass of people and build a platform that can counter fake news during natural disasters.

Social Media, Civic Engagement, and the Slacktivism Hypothesis: Lessons from Mexico’s “El Bronco”

Does social media use have a positive or negative impact on civic engagement? The cynical “slacktivism hypothesis” holds that if citizens use social media for political conversation, those conversations will be fleeting and vapid. Most attempts to answer this question involve public opinion data from the United States, so we offer an examination of an important case from Mexico, where an independent candidate used social media to communicate with the public and eschewed traditional media outlets. He won the race for state governor, defeating candidates from traditional parties and triggering sustained public engagement well beyond election day. In our investigation, we analyze over 750,000 posts, comments, and replies over three years of conversations on the public Facebook page of  “El Bronco.” We analyze how rhythms of political communication between the candidate and users evolved over time and demonstrate that social media can be used to sustain a large quantity of civic exchanges about public life well beyond a particular political event.
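As a rough illustration of how such rhythms can be examined, one could bucket the page’s posts, comments and replies by month; the CSV file and its columns below are hypothetical, not the study’s actual pipeline:

import pandas as pd

# Hypothetical export: one row per post, comment, or reply on the page,
# with a timestamp and an item type.
activity = pd.read_csv("el_bronco_page_activity.csv", parse_dates=["created_time"])

# Monthly counts of each kind of interaction, to see how activity ebbs and flows.
monthly = (
    activity.groupby([pd.Grouper(key="created_time", freq="MS"), "type"])
    .size()
    .unstack(fill_value=0)
)
print(monthly)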

Read more about our research: here

Spanish Version of Paper: here

a graphic that shows a politician celebrating victory

Visualizing Targeted Audiences

a graphic that shows the distributions of social queries

Users of social networks can be passionate about sharing their political convictions, art projects or business ventures. They often want to direct their social interactions to certain people in order to start collaborations or to raise awareness about issues they support. However, users generally have only scattered, unstructured information about the characteristics of their audiences, making it difficult to deliver the right messages or interactions to the right people. Existing audience-targeting tools allow people to select potential candidates based on predefined lists, but they provide few insights about whether those people would be appropriate for a specific type of communication. We introduce an online tool, Hax, to explore the idea of using interactive data visualizations to help people dynamically identify audiences for their different sharing efforts. We also present results from a preliminary empirical evaluation that show the strength of the idea and point to areas for future research.
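As a rough sketch of the underlying idea (not Hax’s actual implementation), audience attributes could be aggregated and fed into a simple chart; the followers.csv file and its columns are hypothetical:

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per audience member, with inferred attributes.
followers = pd.read_csv("followers.csv")  # columns: interest, region

# Count audience members per (interest, region) pair.
summary = (
    followers.groupby(["interest", "region"])
    .size()
    .unstack(fill_value=0)
)

# A static stand-in for the interactive view: which interests dominate, and where.
summary.plot(kind="bar", stacked=True)
plt.title("Audience interests by region")
plt.tight_layout()
plt.show()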