MEDIA WATCH
Year : 2021, Volume : 12, Issue : 2
First page : ( 197) Last page : ( 207)
Print ISSN : 0976-0911. Online ISSN : 2249-8818.
Article DOI : 10.15655/mw/2021/v12i2/160146

Testing ‘Crowdcoding’ Methods in Sub-Saharan African Settings: Using the 2020 Tanzanian Elections to Test Its Validity and Reliability

Gondwe Gregory1*, Some Evariste2

1Department of Journalism, University of Colorado-Boulder, USA

2Ph.D. Candidate, Department of Computer Science, University of Colorado-Boulder, USA

*Correspondence to: Gregory Gondwe, College of Media, Communication, and Information (CMCI), Department of Journalism, University of Colorado Boulder, 1511 University Avenue, USA

Gregory Gondwe is an affiliate of the Department of Journalism at the University of Colorado-Boulder, USA where he teaches media-related courses. His research explores the concepts of persuasive messages and their effects on the news. He examines how individuals and groups choose to assimilate and accept news content as either true or false with a particular interest in Sub-Saharan Africa. His current research explores the Chinese news agenda in African media systems.

Evariste Some is a Ph.D. candidate in Computer Science at the University of Colorado-Boulder, USA. His current research focuses on massive multiple-input multiple-output (MIMO) systems and beamforming.

Online published on 30 May, 2021.

Abstract

This study replicates existing research on crowdcoding and content analysis approaches to test the validity and reliability of content analysis methods in an African setting, using data from the 2020 Tanzanian presidential elections as a case study. Instead of MTurk for crowdsourcing, the study utilized WhatsApp groups and university students from Tanzania to code the data. Using a collected and controlled sample of 400 tweets representing Tanzania's ruling and opposition parties, respectively, our overall findings suggest that crowdcoding produced more reliable data than qualitative content analysis (QCA). However, further analysis suggests that although crowdcoding recorded higher agreement on validity scores, trained coders seemed to provide more accurate reliability scores. Moreover, the data indicate that the traditional training of coders was statistically insignificant in producing accurate validity and reliability scores for QCA.
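The reliability comparison described in the abstract rests on inter-coder agreement statistics. As a minimal illustrative sketch (with hypothetical sentiment labels, not the study's actual data or its exact metric), percent agreement and chance-corrected agreement (Cohen's kappa) between two coders can be computed as follows:

```python
from collections import Counter

def percent_agreement(a, b):
    """Share of items that two coders labeled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two coders."""
    n = len(a)
    po = percent_agreement(a, b)  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if the two coders labeled independently
    pe = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical labels for 8 tweets from two coders
coder1 = ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "neu"]
coder2 = ["pos", "neg", "pos", "pos", "neu", "pos", "neg", "neg"]

print(percent_agreement(coder1, coder2))  # 0.75
print(cohens_kappa(coder1, coder2))       # ~0.61
```

Crowdcoded data typically involve more than two coders per item, in which case a multi-coder statistic such as Krippendorff's alpha is the usual choice; the two-coder kappa above is shown only for simplicity.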


Keywords

Crowdcoding, Crowdsourcing, Sentiment analysis, Content analysis, Tanzania, Sub-Saharan Africa.
