ROGER DO, FOUNDER, QSEARCH
To truly extract value from big data, information needs to be parsed, sifted through, and assigned a value before it can have meaning in our lives. Many media consultancies, market researchers, and business analysts use monitoring tools to separate quality data from noise, but things get trickier when feelings are involved. Are discussions with traction meaningful simply because people are commenting on them? Are positive vibes a sign of collective sarcasm, or is negative discourse a much-needed look in the mirror? Can Artificial Intelligence even begin to help policymakers and researchers quantify the emotive language around today's social affairs?
We spoke with Roger Do, founder of QSearch, a social media intelligence company that helps quantify and analyse social media conversations through their social listening tool.
How is QSearch different from other social media monitoring tools out there?
ROGER: Our most unique feature is a behavioural index that ranks content based on readers’ reactions rather than a post’s shares or likes. This allows our users to quickly identify powerful content, rather than merely widespread content, and make impactful decisions. The graph below shows Facebook data from March 2020, with two notable anger spikes that are both abnormal and out of proportion to the posts’ reach. The first concerns a Muslim pilgrim gathering in Indonesia during a COVID-19 outbreak. The second is a TODAY post about nightclub closures.
What is interesting is that TODAY’s post and CNA’s post have nearly the same number of shares, yet the TODAY article drew nearly 60% more anger. This kind of weightage allows our clients to make rapid and impactful decisions.
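The idea of a behavioural index can be sketched roughly as follows: score each post by a weighted sum of its reactions, with emotive reactions such as anger weighted more heavily, so that two posts with identical share counts can rank very differently. The weights, field names, and figures here are illustrative assumptions, not QSearch’s actual model.

```python
# Hypothetical weights: emphasise anger over other reactions.
ANGER_WEIGHT = 3.0
BASE_WEIGHT = 1.0

def behavioural_score(reactions: dict) -> float:
    """Score a post from its reaction counts, e.g. {'angry': 120, 'like': 500}."""
    score = 0.0
    for kind, count in reactions.items():
        weight = ANGER_WEIGHT if kind == "angry" else BASE_WEIGHT
        score += weight * count
    return score

# Two posts with the same share count but different reaction mixes
# (numbers invented for illustration).
posts = [
    {"id": "cna",   "shares": 1000, "reactions": {"angry": 250, "like": 400}},
    {"id": "today", "shares": 1000, "reactions": {"angry": 400, "like": 250}},
]

# Ranking by behavioural score puts the angrier post first,
# even though shares alone cannot tell the two apart.
ranked = sorted(posts, key=lambda p: behavioural_score(p["reactions"]), reverse=True)
```

With these made-up weights, the angrier post ranks first despite identical share counts, which is the distinction between "powerful" and "widespread" content described above.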
Could QSearch share more about how it determines if a Facebook post is ‘viral’?
ROGER: We look at two main metrics: the number of shares over a specific time, and the velocity of anger over time.
There is a high correlation between anger over time and sharing speed. After all, sharing is triggered by emotions, not just habits. For COVID-19 alerts to governments, when either condition is met, our system sends out an email and lets government staff decide whether action is needed. Fast-sharing content, or content generating fast-rising anger, needs to be managed during sensitive times.
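The "either condition" alert logic described above can be sketched as a simple disjunction over the two metrics: share rate and anger velocity. The threshold values and parameter names below are assumptions for illustration, not QSearch’s production settings.

```python
# Hypothetical thresholds for the two virality signals.
SHARE_RATE_THRESHOLD = 500.0     # assumed: shares per hour
ANGER_VELOCITY_THRESHOLD = 50.0  # assumed: new angry reactions per hour

def should_alert(shares_per_hour: float, angry_per_hour: float) -> bool:
    """Flag a post for an alert email when EITHER signal crosses its threshold."""
    return (shares_per_hour >= SHARE_RATE_THRESHOLD
            or angry_per_hour >= ANGER_VELOCITY_THRESHOLD)

# Fast-rising anger alone is enough to trigger an alert,
# even when sharing speed is still modest.
print(should_alert(shares_per_hour=120.0, angry_per_hour=80.0))   # True
print(should_alert(shares_per_hour=120.0, angry_per_hour=10.0))   # False
```

Using an OR rather than an AND matches the interview: either fast sharing or fast-rising anger is treated as sufficient grounds to notify staff, who then decide whether action is needed.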
Can QSearch identify misinformation online?
ROGER: Determining misinformation is really beyond what machine learning can accomplish. If there were a decent system that could do that, what we’re offering to the government would not be necessary.
How does QSearch ensure that the service shares an accurate analysis of sentiment?
ROGER: We analyse individual choices to help our clients have digital empathy with the people they are interested in.
In the commercial context, we help businesses understand why consumers care about their products and services. For the government, it’s usually about understanding how their message is being perceived and how they can improve their credibility.
Which countries in Southeast Asia are more susceptible to misinformation?
ROGER: Ironically, countries with an over-saturation of social media like Taiwan and the Philippines are less susceptible to major influence operations, because social media there is an overlay on top of real relationships. For Taiwan, it’s friends from school and work. In the Philippines, it is the family. The real-world connection offers some immunity to influence operations. In countries where social isolation is the norm, like the US, misinformation is severely rampant.
How can we use technology to mitigate or curb misinformation shared on Dark Social channels like WhatsApp and Telegram?
ROGER: We are very wary of technical intrusion into private spheres, and private Instant Messaging (IM) channels should be treated as such.
A simple way to prevent private IM technology from being used for misinformation would be to limit the size of a group and the number of groups a natural person can join. This would also make the platform easier to monetise, since people would have to place a value on the channel and its membership.
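The group-limit rule proposed above amounts to two caps enforced at join time: a maximum group size and a maximum number of groups per person. The specific cap values and function names below are hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical caps; real values would be a platform policy decision.
MAX_GROUP_SIZE = 256       # assumed cap on members per group
MAX_GROUPS_PER_USER = 10   # assumed cap on groups per natural person

def can_join(user_group_count: int, group_size: int) -> bool:
    """Allow a join only if neither cap would be exceeded afterwards."""
    return user_group_count < MAX_GROUPS_PER_USER and group_size < MAX_GROUP_SIZE
```

Because both limits must hold, a user already in the maximum number of groups is refused even if the target group has room, and a full group refuses everyone, which is what makes channel membership scarce and therefore valuable.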
Persistent misinformation exists because digital bandwidth is abundant and users are not making good choices about which channels to be in. This is why scams use free platforms: the scammers never have to use an identity-bound payment method.