Welcome to our blog, where we dive into captivating news articles and share exciting mini blog reports about our groundbreaking research and the events surrounding it. We’re thrilled to have a team of talented undergraduate and high school interns who are actively involved in curating and managing this platform, allowing us to connect with a wider audience. Get ready to embark on a journey of knowledge as we share our research findings and insights. Sit back, relax, and enjoy the fascinating world of science and innovation!

Jump to one of the news articles below.


Into the World of AI for Good:

Reflections on My First Week in the Civic AI Lab

By: Undergraduate Researcher Liz Maylin from Wellesley College.

Last Tuesday, I began my journey in the Civic AI Lab at Northeastern University with a mix of excitement, curiosity, and gratitude for the experience. A special thanks to Professor Eni Mustafaraj (Wellesley College) and Dr. Saiph Savage (Northeastern) for this opportunity. I embarked on my first week in this transformative space, eager to learn and contribute to the lab, which studies problems involving people, worker collectives, and non-profit organizations, and builds human-centered systems to address those problems. Some of the lab’s objectives include fighting disinformation and creating tools in collaboration with gig workers. Previous projects include designing tools for Latina gig workers, systems for addressing data voids on social media, and a system for quantifying the invisible labor of crowd workers. Nestled at the crossroads of Human-Computer Interaction, Artificial Intelligence, and civic engagement, the lab’s research is thoughtful and resoundingly impactful.
I am honored to join a project that will support workers in their collective bargaining efforts.

First Impressions at Northeastern

During my first days, the differences between Wellesley College and Northeastern stood out to me the most. Wellesley College is located 12 miles outside of Boston in the extremely quiet, wealthy town of Wellesley, whereas Northeastern is located directly in the city, allowing for greater access and a larger community. I get to walk past Fenway Park, various restaurants, boba shops, a beautiful park, and the Museum of Fine Arts on my commute! It is definitely a change of setting, but I am happy for the experience. So much to explore!
First weeks are exciting because there is so much to learn and so many new people to meet. The research team is full of amazing, talented students whom I am excited to collaborate with and learn from. As I prepare for the graduate school application process, I am fortunate to gain insight into the lives of PhD and Master’s students, which will help me make informed decisions for my own academic journey. Everyone has been very welcoming and helpful, and I am thrilled to spend this summer with them.

Exploring Gig Work and Participatory Design

I have spent most of my first week getting familiar with gig work and participatory design through literature review. Gig work is a type of employment arrangement in which individuals perform short-term jobs or tasks; it includes independent contracting, freelancing, and project-based work. Often, gig work is presented as the opportunity to “be your own boss” and “work on your own time”; however, this line of work comes with challenges such as irregular income, limited job security, and typically no benefits. The use of digital platforms has facilitated connection between workers and employers; however, there is room for improvement that would benefit both users and platforms. Participatory design is a method that includes stakeholders and end-users in the process of designing technologies, with the goal of creating useful tools or improving existing ones. For example, researchers at the University of Texas at Austin held sessions with drivers from Uber and Lyft to reimagine a design of the platform that would center their well-being. It’s fascinating work that unveils different solutions and possibilities capable of reconciling stakeholder and worker issues.

Learning about Data Visualization

Additionally, I have been getting acquainted with different forms of data visualization. I have some experience programming in Python, but usually for problem sets or web scraping, so I was filled with anticipation to acquire a new skill. Specifically, I have focused on text analysis. With the help of tutorials, Google, and Viraj from our lab, I was able to make a word cloud showing the most frequent words in a dataset of reviews of women’s clothes from 2019 (shown below).

Through this process, I learned about various resources, such as Kaggle and DataCamp, that provide datasets and tutorials for practicing working with data. I originally tried using the NLTK library, but I had several problems with my IDE (VS Code). With some troubleshooting help from lab members, I switched my approach to using just pandas, matplotlib, and wordcloud. I am happy I got it working and I’m looking forward to refining this skill.
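The counting step behind a word cloud can be sketched in plain Python. The sample reviews and stopword list below are hypothetical stand-ins for the much larger Kaggle dataset, and the drawing itself (sizing each word by its count) is what the wordcloud library handles:

```python
from collections import Counter
import re

# Hypothetical sample reviews standing in for the Kaggle
# women's clothing reviews dataset used in the post.
reviews = [
    "Love this dress, the fabric is soft and the fit is perfect",
    "The dress runs small but the color is beautiful",
    "Soft fabric, perfect dress for summer",
]

# A tiny illustrative stopword list; the wordcloud library ships a larger one.
stopwords = {"the", "this", "is", "and", "but", "for", "a", "an"}

# Lowercase, tokenize, and drop stopwords.
words = [
    word
    for review in reviews
    for word in re.findall(r"[a-z']+", review.lower())
    if word not in stopwords
]

# A word cloud then draws each word at a size proportional to its count.
freq = Counter(words)
print(freq.most_common(3))  # [('dress', 3), ('fabric', 2), ('soft', 2)]
```

Swapping the inline list for a pandas column read from the CSV gives the same frequency table on the real dataset.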
As I wrap up my first week, I am beyond excited for the opportunities that lie ahead. This experience has ignited a passion for leveraging technology for civic engagement. I am grateful for the warm welcome, the technical help, and the inspiring conversations from this week. I am eager to collaborate and contribute to the work of the lab. 🙂

Starting My High School Summer Internship at the Northeastern Civic AI Lab

By: High School Intern Simon Juknelis


Hi! I’m Simon Juknelis, a rising high school senior at Noble and Greenough in Dedham, MA, and this week, I am beginning my work as an intern at the Civic AI Lab at Northeastern University. I’ve always been interested in building projects that help other people and make an impact, and that’s what I hope to achieve over the course of this internship.

The lab’s overall mission is to build new technology solutions that create equitable positive impacts and that empower all members of society. To achieve this goal, the lab works with organizations such as the National Science Foundation and UNESCO, as well as tech industry leaders such as Twitch and Meta. The lab has done research in a large number of areas, such as preventing disinformation using AI and studying data-labeling work, and I’m very excited to get to work with this team!

Participatory Design

Our lab’s work over the coming months will involve building software solutions for use by and to benefit gig workers. As such, I read up on the methodological framework of participatory design. Participatory design encapsulates the idea of giving the users of a product the power to shape it to fit their needs. Participatory design can be carried out with interviews and workshop sessions with a sample of potential users of the product. The future users should be given the ability to give suggestions during the ideation phase of the product as well as at various stages throughout its development.
Our lab will be using participatory design over the next few months in order to conduct our research and build solutions. As we work on finding research participants and setting up interviews, I decided to test my technical skills by building a small web plugin called ProductWords, which allows users to look through Amazon products, add them to a list, and see statistics about them.

Participatory Design for Gig Workers

In fields like gig work, there can often be a large power imbalance between a single worker, on one hand, and the corporate clients and work platforms that provide the gig worker’s income, on the other. As such, tools for gig workers are often designed without the workers’ actual needs in mind; instead, they are designed by the platforms based on what the platforms think the workers need, or even based on what would benefit the platforms or their clients. Participatory design is therefore an important tool to ensure that tools built for gig workers actually benefit those workers.

Designing Tools for Gig Workers with Figma

Part of our lab’s work will involve using a software platform called Figma for UI/UX design. One of the main benefits of using a service like Figma is the collaborative benefits it provides. It allows for ideas about interface layout, animations, and functionality to be more easily communicated between team members, and multiple team members can work together on the same files to create a unified design workspace.
I used Figma to design the interface for ProductWords. Doing it this way was especially helpful because I have not yet finished implementing all of these visual elements into ProductWords, but I still have a good sense of what I want the final product to look like and I’ll be able to look back on this Figma doc to see what I should implement.

Data Visualizations for Gig Workers

Our lab is also planning on making extensive use of data visualization in our research on the gig economy. One common and easy-to-understand form of data visualization is the word cloud, which displays words at a size corresponding to their frequency in a given text. One of the resources our team was using described how to create a word cloud using a Python library; however, as I was building ProductWords as a web plugin, I needed to find a way to do this with JavaScript.
I found a JavaScript library called D3, which is a general-purpose solution for creating visual representations of data to be displayed on web pages. Combined with an extension for D3 created by Jason Davies, I was able to create word clouds based on the descriptions of the Amazon products in the list.

Other Technical Aspects

One of the main reasons I decided to make this web plugin was that I wanted to practice some of the features of web plugins that we might want to use for our lab’s research. With ProductWords, I implemented web scraping (pulling the Amazon item description and price information), a popup page, and communication between the web-scraper background script and the popup script.
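ProductWords does this scraping in JavaScript inside the browser, but the core idea of pulling fields out of product-page markup can be sketched in Python with only the standard library. The HTML snippet, element ID, and class name below are hypothetical, not Amazon’s real markup:

```python
from html.parser import HTMLParser

# Hypothetical product-page fragment; real Amazon markup differs.
PAGE = """
<div id="title">Wireless Mouse</div>
<span class="price">$24.99</span>
"""

class ProductParser(HTMLParser):
    """Collects the text inside the title div and the price span."""

    def __init__(self):
        super().__init__()
        self.capture = None  # field name we are currently inside
        self.data = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("id") == "title":
            self.capture = "title"
        elif tag == "span" and attrs.get("class") == "price":
            self.capture = "price"

    def handle_data(self, data):
        # Store the first non-whitespace text seen inside a captured tag.
        if self.capture and data.strip():
            self.data[self.capture] = data.strip()
            self.capture = None

parser = ProductParser()
parser.feed(PAGE)
print(parser.data)  # {'title': 'Wireless Mouse', 'price': '$24.99'}
```

In the browser-extension setting, the same selection logic is typically a `querySelector` call in the content script, with the result passed to the popup via message passing.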
ProductWords is not a very useful plugin yet, but I got some good practice implementing the features that will probably be necessary for any version of the web plugin(s) our lab will work on, and maybe it could even be used as a jumping-off point that gets evolved into our final product.

Human Centered AI Live Stream: Sota Researcher!


Research engineer Phil Butler from our lab is starting a new live stream on Human Centered AI. Through his live stream he will help you design and implement AI for people.

  • In each live stream you will learn how to design and create AI for people from start to finish. He will teach you different design methodologies such as mockups, storyboards, and service design, as well as different AI models and recent state-of-the-art techniques. Each live stream will include code that helps you build a complete AI-for-people project.
  • Some of the topics he will cover include: Understanding and Detecting Bias in AI; Principles for Designing Fair and Just AI; and How to Create Explainable AI.
  • The streams will benefit anyone who wants to learn how to create AI on their own, while also respecting human values.
  • The stream will help people learn how to implement AI using state-of-the-art techniques (which is key for getting top industry jobs), while remaining ethical and just about the AI that is created.

Join us!

Designing Public Interest Tech to Fight Disinformation.

By: Victor Storchan

Our research lab organized a series of talks with NATO around how to design
public interest infrastructure to fight disinformation globally. Our
collaborator Victor Storchan wrote this great piece on the topic:

Disinformation has increasingly become one of the most prominent threats and a global challenge for democracies and our modern societies. It is now entering a new era in which the challenge is two-fold: it has become both a socio-political problem and a cyber-security problem. Both aspects have to be mitigated at a global level but require different types of responses.

Let’s first give some historical perspective.

  • Disinformation didn’t emerge with the automation and social network platforms of our era. In the 1840s, Balzac was already describing how praising or denigrating reviews spread in Paris to promote or downgrade publishers of novels or the owners of theaters. However, innovation, and AI in particular, has given threat actors the technological capability to scale the creation of misleading content.
  • More recently, in the 2000s, technologists were excited about the ethos of moving fast and breaking things. People were basically saying, “let’s iterate fast, let’s ship quickly, and let’s think about the consequences later.”
  • After the 2010s, with deep learning increasingly used in industry, we have seen a new tension emerge between velocity and validation. It was not about the personal philosophy of different stakeholders asking to go “a little bit faster” or “a little bit slower,” but rather about the cultural and organizational contexts of most of the organizations deploying these systems.
  • Now, AI is entering the new era of foundation models. With large language models, we have consumer-facing tools like search engines and recommendation systems. With generative AI, we can turn audio or text into video at scale very efficiently. Foundation-model technology is at the same time becoming more accessible to users, cheaper, and more powerful than ever. That means better AI to achieve complex tasks, solve math problems, and address climate change. However, it also means cheap fake-media generation tools, and cheap ways to propagate disinformation and target victims.

This is the moment we are in today. Crucially, disinformation is not only a socio-political problem but also a cyber-security problem. Cheap deep-fake technology has been commoditized, enabling targeted disinformation in which people receive specific, personalized disinformation through different channels (online platforms, targeted emails, phone). It will be more fine-grained. It has already started to affect people’s lives, emotions, finances, and health.

The need for a multi-stakeholder approach as close as possible to the AI system design. The way we mitigate disinformation as a cyber-security problem is tied to the way we deploy large AI systems and the way we evaluate them. We need new auditing tools and third-party auditing procedures to make sure that deployed systems are trustworthy and robust to adverse threats and toxic content dissemination. As such, AI safety is not only an engineering problem but a multi-stakeholder challenge that will only be addressable if non-technical parties are included in the loop of how we design the technology. Engineers have to collaborate with experts in cognition, psychologists, linguists, lawyers, journalists, and civil society in general. Let’s give a concrete example: mitigating disinformation as a cyber-security problem means protecting the at-risk user and possibly curing the affected user. It may require access to personal, and possibly private, information to create effective counter-arguments. As a consequence, it implies arbitrating a tradeoff between privacy and disinformation mitigation that engineers alone cannot decide. We need a multi-stakeholder framework to arbitrate such tradeoffs when building AI tooling, as well as to improve transparency and reporting.

The need for a macroscopic multi-stakeholder approach. Similarly, at a macroscopic level, there is a need for profound global cooperation and a coalition of researchers to address disinformation as a global issue. We need international cooperation at a very particular moment, in a world that is being reorganized. We are living through a great paradox: new conflicts emerge and structure the world, and at the same time disinformation requires international cooperation. At the macroscopic level, disinformation is not just a technological problem; it is one additional layer on top of poverty, inequality, and ongoing strategic confrontation, a layer that adds to the international disorder and amplifies the others. As such, we also need a multi-stakeholder approach bringing together governments, corporations, universities, NGOs, the independent research community, and others. Very concretely, Europe has taken legislative action with the Digital Services Act (DSA) to regulate harmful content, but it is now clear that regulation alone won’t be able to analyze, detect, and identify fake media. In that regard, the Christchurch Call to Action summit is a positive first step, but it has not yet led to systemic change.

The problem of communication. However, communication between engineers, AI scientists, and non-technical stakeholders generates a lot of friction. These multiple worlds don’t speak the same language. Fighting disinformation is not only a problem of resources (access to data and compute power) but also a problem of communication, where we need new processes and tooling to redefine the way we collaborate in alliance against disinformation. These actors are collaborating in a world where it is becoming increasingly difficult to understand AI capabilities and, as a consequence, to put in place the right mechanisms to fight adverse threats like disinformation. It is more and more difficult to really assess the improvement of AI. This is what Gary Marcus calls the demoware effect: technology that is good for a demo but not in the real world. It confuses not only political leaders but also engineers (for example, Blake Lemoine at Google). Many leaders assume false capabilities about AI and struggle to monitor it. Let us give two reasons that may explain this. First, technology is more and more a geopolitical issue, which does not encourage more transparency and accountability. Second, the information asymmetry between the private and public sectors, and the gap between the reality of the technology deployed in industry and the perception of public decision-makers, has grown considerably, at the risk of focusing the debate on technological chimeras that distract from the real societal problems posed by AI, like disinformation and the ways to fight it.


Recap HCOMP 2022

This week we attended the AAAI Conference on Human Computation and Crowdsourcing (HCOMP’22). We were excited about attending for several reasons: (1) we were organizing HCOMP’s CrowdCamp and thrilled to have the power to drive the direction of this event within the conference; (2) it was the 10-year anniversary of the conference, and we were elated to reflect collectively on where we have come as a field over the years; (3) we chaired one of the keynotes of HCOMP, in particular that of our PhD hero, Dr. Seth Cooper; and (4) we had an important announcement to share with the community!


Organizing CrowdCamp.

This year, Dr. Anhong Guo from the University of Michigan and I had the honor of organizing HCOMP’s CrowdCamp, a unique part of the HCOMP conference. It is a kind of mini-hackathon where you get together with crowdsourcing experts and define the novel research papers and prototypes that push forward the state of the art in crowdsourcing. Previous CrowdCamps led to key papers in the field, such as the Future of Crowd Work paper and my own CHI paper on Subcontracting Micro Work.

This year, when we put out the call for CrowdCamp, we witnessed an interesting dynamic. A large number of participants were students and novices to crowdsourcing, but they had great interest in learning about and then impacting the field. This dynamic reminded me of what I had encountered when I organized my first hackathon, FixIT: the participants had great visions and energy for changing the world! But they also had limited skills to execute their ideas, and they lacked data to determine whether their ideas were actually worth pursuing. To address these challenges in the past, I gave hackathon participants bootcamps to ramp up their technical skills, which allowed them to execute some of their visions. We also taught participants human-centered design to empower them to create artifacts and solutions that match people’s needs, rather than a hammer in search of nails.

For CrowdCamp, we decided to do a similar thing:
We had a mini-bootcamp, organized by Toloka (a crowdsourcing platform), that explained how to design and create crowd-powered systems. The bootcamp started with a short introduction to what crowdsourcing is, common types of crowdsourcing projects (like image/text/audio/video classification), and interesting ones (like side-by-side comparison, data collection, and spatial crowdsourcing). After that, the bootcamp introduced the Toloka platform and some of its unique features. It then briefly presented the Toloka Python SDK (Toloka-Kit and Crowd-Kit) and moved on to an example project on creating a crowd-powered system, specifically a face-detection one. The code used in the bootcamp is in the following Google Colab:

We taught human-centered design and held a panel with real-world crowdworkers who shared their experiences and needs. The participants were empowered to design better for crowdworkers and create more relevant technologies for them, as well as technologies that would better coordinate crowdworkers to produce higher-quality work. The crowdworkers who participated in CrowdCamp all came from Africa, and they shared how crowd work had provided them with job opportunities that were typically not available in their countries. Crowd work helped to supplement their income (as a side job). They were motivated to participate in crowd work for the additional money, and also by knowing that they were contributing to something bigger than themselves (e.g., labeling images that will ultimately help power self-driving cars). Some of the challenges these crowdworkers experienced included unpaid training sessions; it was sometimes unclear whether the training sessions were worth it. They also discussed the importance of building worker communities.

CrowdCamp ended up being a success, with over 70 registered participants who created a number of useful tools for crowdworkers. The event was hybrid, with people on the East Coast joining us at Northeastern University. We had delicious pizza and, given that we were in Boston, delicious Dunkin’ Donuts 🙂

Chairing Professor Seth Cooper’s Keynote.

We had the honor of chairing the keynote of Professor Seth Cooper, an Associate Professor at the Khoury College of Computer Sciences. He previously worked for Pixar Animation Studios and for the game maker Electronic Arts. Seth is also the recipient of an NSF CAREER grant.
Professor Cooper’s research has focused on using video games and crowdsourcing techniques to solve difficult scientific problems. He is the co-creator, lead designer, and developer of Foldit, a scientific discovery game that allows regular citizens to advance the field of biochemistry. Overall, his research combines scientific discovery games (particularly in computational structural biochemistry), serious games, and crowdsourcing games. A pioneer in the field of scientific discovery games, Dr. Cooper has shown that video game players can outperform purely computational methods for certain types of structural biochemistry problems, effectively codifying their strategies and integrating them in the lab to help design real synthetic molecules. He has also developed techniques to adapt the difficulty of tasks to individual players and to generate game levels.

Seth’s talk discussed how he is using crowdsourcing to improve video games, and video games to improve crowdsourcing. What does this mean? In his research, Professor Cooper integrates crowd workers to help designers improve their video games. For example, he integrates crowds to help designers test just how hard or easy the game they are creating is. This enables designers to identify how easy it is for gamers to advance through the different stages of a game. The integration of crowdworkers allows designers to easily iterate on and improve their video games. Dr. Cooper is also integrating gaming to improve crowdsourcing; in particular, he has studied how games can improve the quality of work produced by crowd workers.

During the Q&A with Professor Cooper, some interesting questions emerged:
What types of biases do crowdworkers bring to the table when co-designing video games? It was unclear whether crowdworkers actually play a video game the way typical gamers would. Hence, the audience wondered just how much designers actually use the results of the way crowdworkers engage with a video game. Professor Cooper mentioned that in his research, he found that crowdworkers play games similarly to typical gamers. One difference is that typical gamers (who play voluntarily rather than for pay) will usually focus more on the aspects of the game they like the most, while crowdworkers will explore the whole game instead of focusing on particular parts (because of the role the payments play). Perhaps these crowdworkers feel that by exploring the whole game, they are better showcasing to the requester (designer) that they are indeed playing the game and not slacking off. Some people have a gaming style that focuses on a “catch-them-all” approach (an exploratory mode); the term references Pokémon, where players are interested in exploring the entire game and collecting all the different elements (e.g., Pokémon).

How might we integrate game design to help crowdworkers learn? Dr. Flores-Saviaga posed an interesting question about the role games could play in facilitating the career development of these workers. Professor Cooper expressed interest in this area, mentioning that one can imagine workers earning, instead of badges within the game, real certificates that translate into new job opportunities.

What gave him confidence that the gaming approach to crowdsourcing was worth pursuing? When Foldit came out, it was unclear whether gaming would actually be useful for mobilizing citizen crowds to complete complex scientific tasks. The audience wanted to know what led him to explore this path. Professor Cooper explained that part of it was taking a risk down a path he was passionate about: gaming. I think for PhD students and other new researchers starting out, it can be important to trust your intuition and conduct research that personally interests you. In research, you will take risks, which makes it all the more exciting 🙂


Dr. Jenn Wortman’s Keynote.

We greatly enjoyed the amazing keynote given by Dr. Jenn Wortman Vaughan (@jennwvaughan) at HCOMP 2022. She presented her research in Responsible AI, especially interpretability and fairness in AI systems.

A takeaway is that there are challenges in the design of interpretability tools for data scientists, such as InterpretML or the SHAP Python package: her team found that these tools can lead data scientists to over-trust them and misunderstand how ML models work. For more info, see her CHI 2020 paper, “Interpreting Interpretability.”

Dr. Jeffrey Bigham’s Keynote.

An incredible keynote was given by Dr. Jeffrey Bigham at HCOMP 2022. He presented 17 years of work on image description! He showed the different connections (loops) involved in finding the right problem and the right solution in image description, spanning computer vision, real-time recruitment, gig workers, conversations with the crowd, datasets, etc.

A takeaway is that there can be different interactions, or loops, in the process of applying machine learning and HCI, as seen in the image below, from problem selection to the deployment of the system.

Doctoral Consortium.

The HCOMP doctoral consortium was led by Dr. Chien-Ju Ho and Dr. Alex Williams. The consortium is an opportunity for PhD students to share their research with crowdsourcing and human computation experts. Students have the opportunity to meet other PhD students, industry experts, and researchers to expand their network and receive mentoring from both industry and academia. Our lab participated in the proposal “Organizing Crowds to Detect Manipulative Content.” A lab member, Claudia Flores-Saviaga, presented the research she has done in this space for her PhD thesis.

Exciting news for the HCOMP community!

The big news I want to share is that I have the honor of being a co-organizer of next year’s HCOMP! I will co-organize it with Alessandro Bozzon and Michael Bernstein. We are going to host the conference in Europe, and it will be held jointly with the Collective Intelligence conference. Our theme is about reuniting and helping HCOMP grow by connecting with other fields, such as human-centered design, citizen science, data visualization, and serious games. I am excited to have the honor and opportunity to help build the HCOMP conference.


List of MIT Tech Review Inspiring Innovators

We are part of the amazing network of the MIT Technology Review’s 35 Innovators Under 35. We were invited to their EmTech event and had an amazing dinner with other innovators and people making an impact in the field. We are very thankful to Bryan Bryson for the invitation, and we also want to congratulate him and his team for all the work done to build such a vibrant innovation ecosystem.

I share below a list of some of the innovators I met. Keep an eye on them and their research!

Setor Zilevu (Meta and Virginia Tech).

He is working at the intersection of human-computer interaction and machine learning to create semi-automated, in-home therapy for stroke patients. After his father suffered a stroke, Zilevu wanted to understand how to integrate those two fields in a way that would enable patients at home to get the same type of therapy, including high-quality feedback, that they might get in a hospital. The semi-automated human-computer interaction, which Zilevu calls the “tacit computable empower” method, can be applied to other domains both within and outside health care, he says.

Sarah B. Nelson (Kyndryl)

She is Chief Design Officer and Distinguished Designer for Kyndryl Vital, Kyndryl’s designer-led co-creation experience. From the emergence of the web through the maturity of user experience practice, Sarah is known throughout the design industry as a thought leader in design-led organizational transformation, participatory, and forward-looking design capability development. At Kyndryl, she leads the design profession, partnering with technical strategists to integrate experience ecosystem thinking into the technical solutions. Sarah is an encaustic painter and passionate surfer.

Moses Namara (Meta and Clemson University).

Namara co-created the Black in Artificial Intelligence graduate application mentoring program to help students applying to graduate school. The program, run through the resource group Black in AI, has mentored 400 applicants, 200 of whom have been accepted to competitive AI programs. It provides an array of resources: mentorship from current PhD students and professors, CV evaluations, and advice on where to apply. Namara now sees the mentorship system evolving to the next logical step: helping Black PhD and master’s students find that first job.

Joanne Jang (OpenAI)

Joanne Jang is the product lead of DALL·E, an AI system by OpenAI that creates original images and artwork from a natural language description. Joanne and her team were responsible for turning the DALL·E research into a tool people can use to extend their creative processes and for building safeguards to ensure the technology will be used responsibly. The DALL·E beta was introduced in July 2022 and now has more than 1 million users.

Daniel Salinas (Colombia: Super Plants)

His start-up uses nanotechnology to monitor plants by connecting them to computers, helping to support decarbonization. Humans have “plant blindness”: our biases keep us from perceiving plants the way we perceive animals. This plant-human disconnect means that tree-planting projects aimed at capturing carbon in the face of the climate crisis are not sustainable if the reforestation is not maintained over time. Colombian entrepreneurship student Daniel Salinas discovered the lack of infrastructure in the fight for decarbonization through a tree-planting start-up. As he recalls: “Every time we went into the field we had problems.” To break this disconnect between people and trees, Salinas created a plant-computer interface that makes it possible to track vegetation through his start-up Superplants. With this contribution, Salinas became one of MIT Technology Review en español’s Innovators Under 35 Latin America 2022.

Relevant References:

Fighting online trolls with bots

An animated troll typing on a laptop

Reposted from The Conversation:

The wonder of internet connectivity can turn into a horror show if the people who use online platforms decide that instead of connecting and communicating, they want to mock, insult, abuse, harass and even threaten each other. In online communities since at least the early 1990s, this has been called “trolling.” More recently it has been called cyberbullying. It happens on many different websites and social media systems. Users have been fighting back for a while, and now the owners and managers of those online services are joining in.

The most recent addition to this effort comes from Twitch, one of a few increasingly popular platforms that allow gamers to play video games, stream their gameplay live online and type back and forth with people who want to watch them play. Players do this to show off their prowess (and in some cases make money). Game fans do this for entertainment or to learn new tips and tricks that can improve their own play.

a screenshot of a video game that shows advice from spectators

When spectators get involved, they can help a player out. Saiph Savage, CC BY-ND

Large, diverse groups of people engaging with each other online can yield interesting cooperation. For example, in one video game I helped build, spectators watching a stream could make comments that actually helped the player, such as by slowing down or attacking enemies. But of the thousands of people tuning in daily to watch gamer Sebastian “Forsen” Fors play, for instance, at least some try to overwhelm or hijack the chat away from the subject of the game itself. This can be a mere nuisance, but it can also become a serious problem, with racism, sexism and other prejudices coming to the fore in toxic and abusive comment threads.

In an effort to help its users fight trolling, Twitch has developed bots – software programs that can run automatically on its platform – to monitor discussions in its chats. At present, Twitch’s bots alert the game’s host, called the streamer, that someone has posted an offensive word. The streamer can then decide what action to take, such as blocking the user from the channel.
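The keyword-alert pattern described above can be sketched in a few lines. This is a minimal illustration of the idea, not Twitch’s actual implementation; the blocklist entries and function name are invented placeholders.

```python
# Illustrative sketch: flag chat messages containing blocked words and
# produce alerts for the streamer, who decides what action to take.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms; real lists are curated

def moderate(message: str, author: str) -> list[str]:
    """Return alerts for the streamer; an empty list means no action needed."""
    hits = set(message.lower().split()) & BLOCKLIST
    return [f"ALERT: {author} used blocked word '{w}'" for w in sorted(hits)]
```

Note that the bot only alerts; the decision to block a user stays with the human host.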

an example of images shared by trolls

Trolls can share pornographic images in a chat channel, instead of having conversations about the game. Chelly Con Carne/YouTube, CC BY-ND

Beyond just helping individual streamers manage their audiences’ behavior, this approach may be able to capitalize on the fact that online bots can help change people’s behavior, as my own research has documented. For instance, a bot could approach people using racist language, question them about being racist and suggest other forms of interaction to change how people interact with others.

Using bots to affect humans

In 2015 I was part of a team that created a system using Twitter bots to do activist work: recruiting humans to do social good in their communities. We called it Botivist.

We used Botivist in an experiment to find out whether bots could recruit people and prompt them to contribute ideas about tackling corruption, instead of just complaining about it. We set up the system to watch Twitter for people complaining about corruption in Latin America, looking for the keywords “corrupcion” and “impunidad,” the Spanish words for “corruption” and “impunity.”

When it noticed relevant tweets, Botivist would tweet in reply, asking questions like “How do we fight corruption in our cities?” and “What should we change personally to fight corruption?” Then it waited to see if the people replied, and what they said. Of those who engaged, Botivist asked follow-up questions and asked them to volunteer to help fight the problem they were complaining about.

We found that Botivist was able to encourage people to go beyond simply complaining about corruption, pushing them to offer ideas and engage with others who shared their concerns. Bots could change people’s behavior! We also found, however, that some individuals began debating whether, and how, bots should be involved in activism. Even so, the experiment suggests that people who are comfortable engaging with bots online can be mobilized to work toward a solution, rather than just complaining about the problem.

Humans’ reactions to bots’ interventions matter, and inform how we design bots and what we tell them to do. In research at New York University in 2016, doctoral student Kevin Munger used Twitter bots to engage with people expressing racist views online. Calling out Twitter users for racist behavior ended up reducing those users’ racist communications over time – if the bot doing the chastising appeared to be a white man with a large number of followers, two factors that conferred social status and power. If the bot had relatively few followers or appeared to be a black man, its interventions were not measurably successful.

Raising additional questions

Bots’ abilities to affect how people act toward each other online brings up important issues our society needs to address. A key question is: What types of behaviors should bots encourage or discourage?

It’s relatively benign for bots to notify humans about specifically hateful or dangerous words and let the humans decide what to do about it. Twitch lets streamers decide for themselves whether they want to use the bots, as well as what (if anything) to do if a bot alerts them to a problem. Streamers’ decisions not to use the bots reflect both technological factors and concerns about audience engagement. In conversations I have seen among Twitch streamers, some described disabling the bots because they interfered with browser add-ons the streamers already use to manage their chat space. Others disabled the bots because they felt the bots hindered audience participation.

But it could be alarming if we ask bots to influence people’s free expression of genuine feelings or thoughts. Should bots monitor language use on all online platforms? What should these “bot police” look out for? How should the bots – which is to say, how should the people who design the bots – handle those Twitch streamers who appear to enjoy engaging with trolls?

One Twitch streamer posted a positive view of trolls on Reddit:

“…lmfao! Trolls make it interesting […] I sometimes troll back if I’m in a really good mood […] I get similar comments all of the time…sometimes I laugh hysterically and lose focus because I’m tickled…”

Other streamers even enjoy sharing their witty replies to trolls:

“…My favorite was someone telling me in Rocket League “I hope every one of your followers unfollows you after that match.” My response was “My mom would never do that!” Lol…”

What about streamers who actually want to make racist or sexist comments to their audiences? What if their audiences respond positively to those remarks? Should a bot monitor a player’s behavior on his own channel against standards set by someone else, such as the platform’s administrators? And what language should the bots watch for – racism, perhaps, but what about ideas that are merely unpopular, rather than socially damaging?

At present, we don’t have ways of thinking about, talking about or deciding on these balancing acts of freedom of expression and association online. In the offline world, people are free to say racist things to willing audiences, but suffer social consequences if they do so around people who object. As bots become more able to participate in, and exert influence on, our human interactions, we’ll need to decide who sets the standards and how, as well as who enforces them, in online communities.


How can artificial intelligence be applied to streamline government procedures?

a graphic showing a brain on top of a desk surrounded by computers

Reposted from the blog of the Inter-American Development Bank:

The great challenge of implementing artificial intelligence (AI) technologies in government contexts is finding ways to make them increasingly user friendly and trustworthy, and to integrate into them a perspective of digital social well-being.

In this article we share some guidelines to keep in mind when using artificial intelligence to streamline government procedures. As an example, we use our most recent experience building a new platform for the Mexican passport application process. Our mission, as the Civic Innovation Lab of the Universidad Nacional Autónoma de México (UNAM) and as the New Technologies Team of Mexico’s Ministry of Foreign Affairs (Secretaría de Relaciones Exteriores, SRE), is to create innovations that make it easier for citizens to approach their government and become familiar with its processes.

1. Conceptualize the problem and look for appropriate solutions

We understood that the most important thing in this project was to understand and respond clearly and efficiently to the queries users of this service would submit, so we decided to develop intelligent virtual assistants (computer programs that use AI), which offer two concrete qualities:

  • They let users interact from their mobile device without needing a computer.
  • They allow procedures to be completed through text messages, without making a phone call.

2. Understand the social context of the user population

Identifying the technology that best addresses the problem is important, but we must also study how the AI will interact with its users. While most virtual assistants are designed to give generic answers, teaching these assistants about the social context, and therefore about citizens’ expectations when using these technologies, can translate into higher adoption rates.

In this case, we designed our virtual assistants using Geert Hofstede’s 6-D model as a reference. The bullets below highlight some characteristics that, according to this model, stand out in Mexican culture, and explain how we addressed each one in building our virtual assistants:

  • Mexican culture values interpersonal relationships: we sought to strengthen the connection with citizens by using friendly language, including stickers and emojis, to foster closeness and cordiality in every interaction with the virtual assistant.
  • Mexican culture finds it frustrating when a process produces an uncertain outcome: we made the conversation flow clear and concise. For this we used Botpress to create rule-based, if-then systems, e.g., “if the citizen intends to renew their passport and asks for help with this procedure, then show them the instructions to complete it.” This type of artificial intelligence has predefined outputs; in this case, the outputs are types of government procedures shown according to rules defined by experts, who in turn map the different intents of citizens, all with the aim of fostering certainty.
  • Mexican culture is polychronic (several tasks are carried out at the same time): we made our virtual assistants guide users step by step and remind them what to do to finish the procedure at hand. We also highlight that our virtual assistants can be reached from any phone with internet access, which makes it easy to complete procedures alongside other daily activities.

3. Work on a representative design

Once the conceptualization and contextualization work is done, we have to think about how to represent the project in a way that encourages users to adopt the tool. This is the final stage of our process before implementing the technology; to design the image of our two virtual assistants we followed two lines of work:

3.1 Sessions with designers and citizens

We conducted interviews and design sessions with citizens to imagine, together with the team, what the virtual assistant should look like. When the interviews were over, we ran a qualitative analysis of the ideas shared and developed the following graphics:

icons that show choices labeled letter A through F

With these options, our next step was to run a survey to determine the most popular image within a representative sample of the population.

a bar graph showing results for virtual assistant images

The passport image was the favorite, especially because its connection to government procedures is immediate and it maintains an affinity with the country.

3.2 Sessions with fiction writers

To design our second assistant, we turned to well-known fiction writers for advice on what the virtual assistant should look like. They agreed it should be a tlacomiztli (ringtail), a traditional Mexican animal with ties to native peoples. The next step was naming our new virtual assistant: Mixtli, who has an intellectual look and dresses formally to show that he holds a government post.

At the end of these two exercises, our assistants have a face!

a graphic showing a raccoon sitting at a desk and an animated character holding a laptop

4. Create value for citizens as well as for government officials

Our virtual assistants were conceived not only to guide citizens through the passport process, but also to support the government officials involved in these procedures and lighten their workload wherever possible.

To avoid rote repetition and help workers focus on tasks that require their specialized knowledge, we designed computational mechanisms that delegate repetitive tasks to the virtual assistant, such as answering questions about the consulate’s opening and closing hours. The same mechanisms allow officials to join the conversation whenever a citizen runs into a situation that requires human judgment.

Some preliminary conclusions

Our virtual assistants are now being deployed under an A/B test that lets us understand how citizens interact with them. Early monitoring of their performance suggests that the two assistants complement each other: Mixtli, the virtual assistant created by the fiction writers, seems to spark citizens’ curiosity about the work of the SRE, which suggests it may be promising to position him as the assistant who answers questions about general procedures. The interactions with the passport avatar, in turn, suggest that this assistant would be more effective at guiding citizens specifically through the passport application process. We emphasize that these are early conclusions and that we continue to monitor the project’s impact.
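An A/B test like the one described above needs a stable way to split citizens between the two assistants. The sketch below is a hypothetical illustration of one common approach, not the production system; the assistant labels are invented.

```python
import hashlib

ASSISTANTS = ("passport_avatar", "mixtli")

def assign_assistant(user_id: str) -> str:
    """Deterministic 50/50 split: hashing the user id means a repeat
    visitor always sees the same assistant, keeping the test clean."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return ASSISTANTS[int(digest, 16) % 2]
```

Hashing rather than random assignment is what makes interaction data per assistant comparable across repeat visits.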

We understand that there is still a way to go in modernizing and further streamlining government procedures in Mexico through the implementation of AI technologies. Even so, we believe that every effort and every project brings us closer to that goal, and our teams are committed to continuing to develop the tools that citizens and government officials need to carry out their procedures as smoothly as possible.

If you found our description of how we implemented AI to streamline the Mexican passport process interesting, we invite you to read this short tutorial on Botpress, the platform our teams used to develop the virtual assistants. On that page you can also download the software in its open-source version and start experimenting with it; you might even use this technology to develop useful solutions in your own community.


Savvy social media strategies boost anti-establishment political wins

a politician speaks at a podium

September 12, 2018, 11:52am BST. Mexican President-elect Andrés Manuel López Obrador. AP Photo/Marco Ugarte

Reposted from The Conversation.

By Saiph Savage, Claudia Flores-Saviaga

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Mexico’s anti-establishment presidential candidate, Andrés Manuel López Obrador, faced opposition from the mainstream media. And he spent 13 percent less on advertising than his opponents. Yet the man commonly known by his initials as “AMLO” went on to win the Mexican presidency in a landslide with over 53 percent of the vote in a four-way race in July.

That remarkable victory was at least partly due to the social media strategies of the political activists who backed him. Similar strategies appeared in the 2016 U.S. presidential election and the 2017 French presidential race.

Our lab has been analyzing these social media activities to understand how they’ve worked to threaten – and topple – establishment candidates. By analyzing more than 6 million posts from Reddit, Facebook and Twitter, we identified three main online strategies: using activist slang, attempting to “go viral” and providing historical context.

Redditors’ responses to information strategies

In our study of activity in a key Trump-supporting subreddit, citizens tended to engage most with posts that explained the political ecosystem to them, commenting more on them and giving more upvotes of support.

a chart measuring engagement of reddit posts

Some of these strategies might simply be online adaptations of long-standing strategies used in traditional offline campaigning. But others seem to be new ways of connecting with people and driving them to the polls. Our lab was interested in understanding the dynamics behind these online activists in greater detail, especially as some had crossed over from being mere supporters, even anonymous ones with no formal campaign affiliation, to being officially incorporated into campaign teams.

Integrating activist slang

Some political activists pointedly used slang in their online conversations, creating a dynamic that elevated their candidate as an opponent of the status quo. Trump backers, for instance, called themselves “Deplorables,” supporting “the God Emperor” Trump against “Killary” Clinton.

AMLO backers called themselves “AMLOVERS” or “Chairos,” and had nicknames for his opponents, such as calling the other presidential candidate, Ricardo Anaya, “Ricky Riquin Canayin” – Spanish for “The Despicable Richy Rich.”

Efforts to ‘go viral’

Some political activists worked hard to identify the material that was most likely to attract wide attention online and get media coverage. Trump backers, for instance, organized on the Discord chat service and Reddit forums to see which variations of edited images of Hillary Clinton were most likely to get shared and go viral. They became so good at getting attention for their posts that Reddit actually changed its algorithm to stop Trump backers from filling up the site’s front page with pro-Trump propaganda.

Similarly, AMLO backers were able to keep pro-AMLO hashtags trending on Twitter, such as #AMLOmania, in which people across Mexico made promises of what they would do for the country if AMLO won. The vows ranged from free beer and food in restaurants to free legal advice.

For instance, an artist promised to paint an entire rural school in Veracruz, Mexico, if AMLO won. A law firm promised to waive its fees for 100 divorces and alimony lawsuits if AMLO won. The goal of citizen activists was to motivate others to support AMLO, while doing positive things for their country.

The historian-style activists

examples of explanatory materials shared on social media

Historian-style activists created explanatory materials to share on social media: a) backing AMLO with a visual description of his economic plan; b) Helping Trump backers ‘red-pill liberals,’ waking them up to a conservative reality. Saiph Savage and Claudia Flores-Saviaga, CC BY-ND

Some anti-establishment activists were able to recruit more supporters by providing detailed explanations of the political system as they saw it. Trump backers, for instance, created electronic manuals advising supporters how to explain their viewpoint to opponents to get them to switch sides. They compiled the top WikiLeaks revelations about Hillary Clinton, assembled explanations of what they meant and asked people to share it.

Pro-AMLO activists did even more, creating a manual that explained Mexico’s current economics and how the proposals of their candidate would, in their view, transform and improve Mexico’s economy.

Our analysis identified that one of the most effective strategies was taking time to explain the sociopolitical context. Citizens responded well to, and engaged with, specific reasoning about why they should back specific candidates.

As the U.S. midterm elections approach, it’s worth paying attention to whether – and in what races – these methods reappear; and even how people might use them to engage in fruitful political activism that brings the changes they want to see. You can read more about our research in our new ICWSM paper.


Activist Bots: Helpful But Missing Human Love?

Saiph Savage Nov 29, 2015

an image showing police trying to contain protestors

The activist group that helped us design Botivist, a system for coordinating activist bots that call people to action and recruit them for activism.

Political bots are everywhere, swamping online conversations to promote a political candidate, sometimes even engaging in swiftboating. But instead of continuing to build more political bots, what about creating bots for people, e.g., activists? What would bots for social change look like?

It might help to first ask: when might activists need bots?

Activists can face extreme dangers, including being murdered. Given that bots can remove responsibility from humans, we could design bots that execute, and take responsibility for, tasks that are too dangerous for human activists to do. At the end of the day, what happens if you kill a bot?

Our interviews with activists have also highlighted that activists must spend a great deal of time on recruitment, i.e., trying to convince people to join their cause. While obtaining new members is crucial to the long-term survival of any activist group, activists sometimes spend excessive time trying to convince people who, in the end, may never participate. It can also be hard for humans to rapidly test which recruitment campaigns work best: is it better to run a solidarity campaign that reminds individuals of the importance of helping each other? Or is it more effective to be upfront and directly ask for participation? The automation that bots provide means we could use them to probe different recruitment campaigns at scale, without humans spending too much time on these tedious tasks.
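Probing recruitment campaigns at scale amounts to tagging each outreach message with the strategy it used and tallying which strategy draws replies. The sketch below is a hypothetical illustration of that bookkeeping; the strategy names and message texts are invented.

```python
from collections import Counter

# Two candidate recruitment strategies a bot could probe (invented examples).
STRATEGIES = {
    "direct": "Can you volunteer to help fight this problem?",
    "solidarity": "We can only fix this by helping each other. Will you join us?",
}

def tally_replies(outreach_log: list[tuple[str, bool]]) -> Counter:
    """outreach_log holds (strategy, got_reply) pairs;
    returns the number of replies each strategy earned."""
    return Counter(strategy for strategy, got_reply in outreach_log if got_reply)
```

Comparing the per-strategy counts is what lets a bot discover, as Botivist later did, that being upfront and direct can outperform strategies that work well face-to-face.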

These ideas about how task automation could help activists led us to design Botivist, a system that uses online bots to recruit humans for activism. The bots also allow activists to easily probe different recruitment strategies.

Overview of Botivist, a system that automates the recruitment of people for activism and allows activists to try different recruitment strategies.

We conducted a study on Botivist to understand the feasibility of using bots to convince people to participate in activism. In particular, we studied whether bots could recruit people and prompt them to contribute ideas about tackling corruption. We found that over 80 percent of the calls to action made by Botivist’s automated activists received at least one response. However, we also found that the strategy the bots used mattered. We were surprised to discover that strategies that work well face-to-face were less effective when used by bots. Messages that are effective when delivered by humans sometimes resulted in circular discussions in which people questioned whether bots should be involved in activism. In general, persuasive strategies drew fewer responses.

Number of volunteers and responses that each strategy triggered. When the bots were upfront and direct they recruited the most participants and prompted the most responses.

The individuals who decided to collaborate with Botivist were already involved in online activism and marketing. They mentioned hashtags and Twitter accounts related to social causes and marketing analytics. It is likely that people associated Botivist with online marketing schemes; those who responded were the ones who already engage with such marketing agents in their communities, so interacting with a bot was perhaps more natural for them.

To design bots for activists, it is necessary to first understand the communities in which the bots will be deployed. If we want to design bots that can take on some of the more dangerous activities of human activists, we must first understand how people react when an automated agent performs the task. Will it be as effective as when done by a human? Many activists who endanger their lives making timely reports on terrorists or organized criminals are usually very empathic and caring, and show great solidarity with their public. Will it matter when these tasks are done by an automated agent that, by nature, cannot care?

To read more about our system Botivist, check out our CSCW 2016 research paper: Botivist: Calling Volunteers to Action Using Online Bots,

by Saiph Savage, Andres Monroy-Hernandez and Tobias Hollerer.

Points/talking bots: “Activist Bots: Helpful But Missing Human Love?” is a contribution to a weeklong workshop at Data & Society that was led by “Provocateur-in-Residence” Sam Woolley and brought together a group of experts to get a better grip on the questions that bots raise.


‘Making Europe Great Again,’ Trump’s online supporters shift attention to the French election

a graphic showing the Eiffel Tower, French flag and frogs

The online movement that played a key role in getting Donald Trump elected president of the United States has begun to spread its political influence globally, starting with crossing the Atlantic Ocean. Among several key elections happening in 2017 around Europe, few are as hotly contested as the race to become the next president of France. Having helped install their man in the White House in D.C., a group of online activists is now trying to get their far-right woman, Marine Le Pen, into the Élysée Palace in Paris.


A French adaptation of a common Trump-backers’ meme: Pepe the Frog as Marine Le Pen. LitteralyPepe/reddit

In 2016, a group of online activists some might call trolls — people who engage online with the specific intent of causing trouble for others — joined forces on internet comment boards like 4chan and Reddit to promote Donald Trump’s candidacy for the White House. These online rebels embraced Trump’s conscious efforts to disrupt mainstream media coverage, normal politics and public discourse. His anti-establishment message resonated with the internet’s underground communities and inspired their members to act.

The effects of their collective work, for the media, the public and indeed the country, are still unfolding. But many of the same individuals who played important roles in the online effort for Trump are turning their attention to politics elsewhere. Their goal, one participant told Buzzfeed, is “to get far right, pro-Russian politicians elected worldwide,” perhaps with a secondary goal of heightening Western conflict with Muslim countries.

Our research has focused on studying political actors and citizen participation on social media. We used our experience to analyze 16 million comments on five separate Reddit boards (called “subreddits”). Our analysis suggests that some of the same people who played significant roles in a key pro-Trump subreddit are sharing their experience with their French counterparts, who support the nationalist anti-immigrant candidate Le Pen.

Finding Trump backers active in European efforts 

The so-called “alt-right” movement, an offshoot of conservatism mixing racism, white nationalism and populism, is fed in part by online trolls, who use 4chan message boards and the Discord messaging app to create thousands of memes — images combining photographs and text commentary — related to political causes they want to promote.

As Buzzfeed reported, they test political images on Reddit to see which get the most attention and the biggest reactions before sending them out into the wider world on Facebook, Twitter and other social media platforms. However, it wasn’t clear how much this actually happened.

We set out to quantify exactly what was happening, how often, and how many people were involved. We started with the subreddit “The_Donald,” one of the largest pro-Trump hubs, and analyzed the activity of every Reddit username that had ever commented in that subreddit from its start in 2015 until February 2017. We looked specifically for those same usernames’ appearances in European-related “sister subreddits” — as recognized by “The_Donald” users themselves.

a bar graph showing activity of The_Donald subreddit participants

We found that of the more than 380,000 active Reddit users in “The_Donald,” over 3,000 of them had indeed participated in one or more of the “sister subreddits” supporting right-wing candidates in European elections, “Le_Pen,” “The_Europe,” “The_Hofer” and “The_Wilders.” The first two had the most involvement from people also active in “The_Donald.” This is admittedly a small percentage of participants, but it shows that there is overlap, and that the knowledge and techniques used to support Trump are making their way to Europe.
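The core of the overlap analysis described above is a set intersection over usernames. The sketch below is our own illustration of that step, not the authors’ pipeline; the function name is hypothetical.

```python
def crossover_users(donald: set[str], sisters: dict[str, set[str]]) -> set[str]:
    """Usernames that commented in The_Donald and in at least one
    European 'sister subreddit' (e.g. Le_Pen, The_Wilders)."""
    # Pool every commenter across all sister subreddits, then intersect.
    sister_users = set().union(*sisters.values())
    return donald & sister_users
```

For example, with `donald = {"a", "b", "c"}` and `sisters = {"Le_Pen": {"b", "x"}, "The_Wilders": {"c"}}`, the crossover set is `{"b", "c"}`.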

What are they up to?

Next we looked at how involved these Trump-supporting users were in the European right-wing discussions, based on how many comments a user made in any of the subreddits. Most users were moderately active, as might be expected of casual users exploring issues of personal interest. But we identified several accounts with behavior that suggested they might be actively organizing ultra-right collective action in the U.S. and Europe.

Activity of users involved in The_Donald and at least one European right-wing subreddit

There were three types of these users: people who were actively involved in European efforts, bots making automated posts, and people concerned about global influences.

The activists

Two outlier accounts in particular were what we called “Ultra-Right Activists.” They commented heavily on “The_Donald” and the four European subreddits — one of the outliers had more than 2,500 comments in “The_Donald” and over 1,000 comments in “Le_Pen.” The other outlier had over 1,000 comments on “The_Donald” and over 1,000 across the European subreddits.

These accounts actively called people to action in both the U.S. and European subreddits. For instance, one post in “Le_Pen” recruited people to make memes: “Participate in the Discord chat to help us make memes.” Another post sought to organize Americans and Europeans to work together to create propaganda that would be effective in France: “We still have to explain to the Anglos some things about French politics and candidates so that they can understand. We must translate/transpose into the French context the memes that worked well in the U.S.”

We also found plans of flooding Facebook and Twitter with ultra-right content: “Yep, the media call them ‘la fachosphère’ (because we’re obviously literal fascists, right), and it dominates Twitter. That’s a great potential we have there. Soon I’m making an official account to retweet all the subs’ best content to them and make it spread.”

Not all of their efforts were necessarily successful. For example, an effort to transfer Trump’s main campaign slogan to Europe never really got going.

One comment we found on “The_Donald” appeared to lay out a game plan: “PHASE 1: MAGA (Make America Great Again) PHASE 2: MEGA(Make Europe Great Again).” Another sounded a similar theme: “Once we get the ball rolling here we will Make Europe Great Again. Steve Bannon has already been deployed to help Marine Le Pen, we haven’t forgotten about Europe.”

But we found only 210 comments mentioning “Make Europe Great Again” across the four European subreddits. While people on “The_Donald” seemed excited about spreading the phrase, Europeans didn’t go for it. Perhaps the phrase, being in English, simply didn’t resonate with European audiences.

The bots

This group involved accounts that were moderately involved in both “The_Donald” and the European subreddits. While many of them were undoubtedly real people, some accounts in this group behaved like bots, posting the same comment repeatedly, or even including the word “bot” in their account names.
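The two bot-like signals just mentioned — repeated identical comments and "bot" in the account name — can be turned into a simple flagging heuristic. The function and threshold below are assumptions for illustration, not the study's actual detection method.

```python
# Illustrative heuristic for flagging bot-like accounts, based on the two
# signals above. The repeat threshold is an assumed value, not the study's.

from collections import Counter

def looks_like_bot(username, comments, repeat_threshold=5):
    """Flag an account as bot-like if its name contains 'bot' or if it
    posted the same comment at least `repeat_threshold` times."""
    if "bot" in username.lower():
        return True
    counts = Counter(comments)  # how many times each distinct comment appears
    return any(n >= repeat_threshold for n in counts.values())

print(looks_like_bot("MEGA_bot_2017", []))             # True
print(looks_like_bot("casual_user", ["hi", "hello"]))  # False
```

In practice such heuristics only surface candidates; a human would still review flagged accounts, since real users also repeat themselves.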

Just as we don’t know the real identities or locations of the humans who posted, it’s not clear who might have been running the bots, or why. But these bot-type messages were posted in both The_Donald and the European subreddits. They seemed to be used as a way to create silly or fun collaborations between Americans and Europeans, and to spread an ultra-right-wing view of certain world events.

Some of the words most commonly used by people in this group were “news,” “fake” and “CNN.” People seemed to use those words to criticize traditional news media coverage of the ultra-right. However, some people also commented about possibly manipulating the big news channels to get coverage for Le Pen similar to Trump’s strategies:

“So we must get Le Pens (sic) name in the news every damn day. Just the (sic) like the MSM [mainstream media] couldn’t ignore Donald here, they will have to give her air time which will help her reach the disenfranchised.”

The anti-globalists

A third group of accounts were highly active on “The_Donald” but far less so on the European subreddits we examined. When they did join the European discussions, it was usually to argue that liberals in Europe and the U.S. were ruining things around the globe. People in this cluster appeared to participate in the European subreddits primarily to emphasize negative actions they believed liberals had orchestrated.

With the French election still weeks away, any effects these people might be having remain unclear. But it’s worth watching, and seeing where these activists turn their attention next.


“Countering Fake News in Natural Disasters via Bots and Citizen Crowds”

By Tonmona Roy

a graphic titled 'Countering Fake News During Natural Disasters'

On September 19, 2017, Mexico City was hit by a 7.1 magnitude earthquake, killing hundreds of people. The death toll rose quickly, with people trapped under the debris of fallen buildings. When there is a catastrophe of this magnitude, it is hard for the government to quickly assist everyone. Many started using social media to spread news about trapped people and needed supplies. Among the social media platforms, Twitter became the main site for exchanging information and mobilizing citizens for action. People used hashtags to learn what was happening in their neighborhoods and what direct actions they could take to help. Some of the most popular hashtags were #AquiNecesitamos (#HereWeNeed) and #Verificado19S (#Verified19S; 19S stands for September 19th, the day of the earthquake). With these hashtags, people posted what they needed and where to deliver it.

However, misinformation started spreading. For example, some citizens started tweeting and calling for help for a doctor allegedly trapped in a building.

But Dr. Elena Orozco, her friends and family all suddenly started reporting on social media:

“…Elena Orozco is not trapped in any building. She is right here with us. She was trying to rescue her co-workers, who were the ones trapped in the building. We are actually still missing Erik Gaona Garnica who decided to go back into the building to get his computer…”

Systems for Countering Fake News Stories

Given that fake news was critically affecting the rescue and well-being of people, we decided to do something about it. We quickly realized that Codeando Mexico (a social good startup) and universities across Mexico, such as UNAM, were organizing crowds of citizens to build civic media to help with the earthquake response. Our research lab (the HCI lab at West Virginia University) therefore joined forces with them, and in a weekend we had rapidly built a large-scale system to counter fake news and deliver verified news about the earthquake.

This led us to bootstrap on existing social networks of people to solve the cold-start problem. Through our investigations, we identified that citizens had put together a Google Spreadsheet where they posted news reports about the earthquake that were fully verified (a group of people on the ground actively verified each report). The group would then manually post the verified news from the spreadsheet to their social media accounts. But as the group became more popular, it was hard for the volunteers to keep up with the time and coordination the effort required.

Bootstrapping Bots on Networks of Volunteers

Our second design focuses on automating some of the critical bottlenecks these networks of volunteers faced when verifying news. In our interviews, we identified that it was difficult for volunteers to differentiate fake from real news, because doing so involved gathering all of the facts behind a story, and that sharing the news on social media was also tedious. Our second platform therefore introduced the idea of pairing citizen crowds with bots (such as our bot @FakeSismo). The bots help verify news by gathering facts and then widely sharing the verified stories on social media, along with an automatically generated image macro that gives the story more visibility. In this way, human volunteers can focus on verifying the information. The workflow of our system is as follows:

a graphic that shows the system's workflow: 1. Volunteers report news. 2. The network of volunteers verify the news reports. 3. Bots take the verified reports and distribute on social media and with influencers who can decide what to post.
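The three-step workflow above can be sketched in a few lines. The report structure, verification rule, and posting function below are hypothetical stand-ins for the real system (the volunteer spreadsheet and the @FakeSismo Twitter bot), assumed here for illustration.

```python
# Minimal sketch of the report -> verify -> distribute pipeline.
# The confirmation threshold and report fields are assumptions.

MIN_CONFIRMATIONS = 2  # assumed number of on-the-ground confirmations needed

def verified_reports(reports):
    """Step 2: keep only reports confirmed by enough volunteers."""
    return [r for r in reports if len(r["confirmed_by"]) >= MIN_CONFIRMATIONS]

def distribute(report):
    """Step 3: stand-in for the bot posting a verified report on social media."""
    return f"[VERIFIED] {report['text']} #Verificado19S"

reports = [
    {"text": "Water needed at Av. Reforma shelter", "confirmed_by": ["ana", "luis"]},
    {"text": "Doctor trapped in collapsed building", "confirmed_by": []},
]

for r in verified_reports(reports):
    print(distribute(r))
# [VERIFIED] Water needed at Av. Reforma shelter #Verificado19S
```

The key design choice, as described above, is that humans do the verification (step 2) while the bot handles only the mechanical distribution (step 3).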

Bots in Action

To test our bot, we had it start by tweeting verified information about the resources needed, and it got very good responses. The bot currently has 176 followers, and the number is growing. As an example, the bot posted a news report about certain needed resources, and someone began engaging with it, saying they had a refrigerator to give away. The bot focused on distributing the information and connecting them with a citizen who could use the refrigerator.

We also saw that citizens tried to actively verify news reports along with the bot.

In short, our bot is working together with a group of enthusiastic volunteers, helping gather and distribute verified information. As we continue testing the bot, we hope to connect with a larger group of people and build a platform that can counter fake news during natural disasters.

Social Media, Civic Engagement, and the Slacktivism Hypothesis: Lessons from Mexico’s “El Bronco”

Does social media use have a positive or negative impact on civic engagement? The cynical “slacktivism hypothesis” holds that if citizens use social media for political conversation, those conversations will be fleeting and vapid. Most attempts to answer this question involve public opinion data from the United States, so we offer an examination of an important case from Mexico, where an independent candidate used social media to communicate with the public and eschewed traditional media outlets. He won the race for state governor, defeating candidates from traditional parties and triggering sustained public engagement well beyond election day. In our investigation, we analyze over 750,000 posts, comments, and replies over three years of conversations on the public Facebook page of “El Bronco.” We analyze how rhythms of political communication between the candidate and users evolved over time and demonstrate that social media can be used to sustain a large quantity of civic exchanges about public life well beyond a particular political event.

Read more about our research: here

Spanish Version of Paper: here

a graphic that shows a politician celebrating victory

Visualizing Targeted Audiences

a graphic that shows the distributions of social queries

Users of social networks can be passionate about sharing their political convictions, art projects or business ventures. They often want to direct their social interactions to certain people in order to start collaborations or to raise awareness about issues they support. However, users generally have scattered, unstructured information about the characteristics of their audiences, making it difficult for them to deliver the right messages or interactions to the right people. Existing audience-targeting tools allow people to select potential candidates based on predefined lists, but the tools provide few insights about whether or not these people would be appropriate for a specific type of communication. We have introduced an online tool, Hax, to explore the idea of using interactive data visualizations to help people dynamically identify audiences for their different sharing efforts. We present the results of a preliminary empirical evaluation that demonstrate the promise of the idea and point to areas for future research.