IMPORTANT NOTE: This feature is only available to Check workspace Admins.

As a security measure, you first need to click the gear icon (1), select the Data tab (2), and click the link to fill out your data request form (3).

Once data access is approved, you can then access your data by navigating to the workspace menu (1), selecting the ‘Data’ tab (2), and clicking the blue button to view your data report (3).

This opens downloadable data tables and charts in a new window.

Below are descriptions of the columns in the summary data sheet:

Week of

All data is grouped by week. Each week is a row.

Conversation details

Conversations

The number of times users have interacted with the bot. A conversation ends in one of three ways:

  • A bot resource is consumed by the user.

  • The user submits a fact-check query.

  • The user remains inactive for more than 15 minutes.

Once a conversation ends, any new activity by the user on the tipline starts a new conversation.
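As a rough illustration, the sketch below shows how the 15-minute inactivity rule could group one user's messages into conversations (a hypothetical Python example; the timestamps and logic are illustrative only, not Check's actual implementation or export format):

  from datetime import datetime, timedelta

  INACTIVITY_LIMIT = timedelta(minutes=15)

  # Hypothetical timestamps of one user's messages, oldest first.
  messages = [
      datetime(2024, 1, 1, 9, 0),
      datetime(2024, 1, 1, 9, 5),   # 5-minute gap: same conversation
      datetime(2024, 1, 1, 9, 40),  # 35-minute gap: starts a new conversation
  ]

  conversations = []
  for ts in messages:
      # Start a new conversation if there is none yet, or if the gap since
      # the previous message exceeds the 15-minute inactivity limit.
      if not conversations or ts - conversations[-1][-1] > INACTIVITY_LIMIT:
          conversations.append([ts])
      else:
          conversations[-1].append(ts)

  print(len(conversations))  # -> 2 conversations for this user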

Unique users

All users who started a conversation that week. If a user started two or more conversations, they are counted only once per week.

Resources sent

The number of conversations that ended with a bot resource sent back to the user.

% Resources sent

Percentage of conversations ending with a resource sent.

No action after conversation initiated

The number of conversations that ended with neither a bot resource sent nor the user choosing the option to send a fact-check. If a user sends content to the tipline but does not choose the fact-check option, the conversation counts as no action.

% No action

Percentage of conversations ending with no action.

All tipline submissions

The total number of submitted queries, regardless of duplicates, before any triage.

% Submissions

Percentage of conversations ending with a submission, valid or not (the user chose the option to submit a query).
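For example (using made-up numbers): if a week had 200 conversations, of which 80 ended with a resource sent, 50 with no action, and 70 with a submission, then % Resources sent = 80 / 200 = 40%, % No action = 50 / 200 = 25%, and % Submissions = 70 / 200 = 35%.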

Submission type

Important: A submission can have only one of the following types at a time.

All submissions = Trashed submissions + Exact matches + Machine confirmed similar + Machine suggested similar + Human confirmed similar + New submissions
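For example (made-up numbers): a week with 100 tipline submissions might break down as 10 trashed, 30 exact matches, 15 machine confirmed similar, 5 machine suggested similar, 10 human confirmed similar, and 30 new submissions: 10 + 30 + 15 + 5 + 10 + 30 = 100.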

Trashed submissions

Submissions that have been sent to the trash manually, via an automated rule, or by matching an already trashed submission.

Exact matches

Submissions that are exact duplicates of existing submissions:

  • Identical URLs

  • Identical uploaded images

  • Identical uploaded videos

  • Identical uploaded sound files

  • Identical text items

Machine confirmed similar

Submissions that have been found similar to existing submissions by Check's algorithm, and listed as 'Confirmed similar' in Check. Currently this only includes image and text submissions.

Machine suggested similar

Submissions that have been flagged as probably similar to existing submissions by Check's algorithm, and listed as 'Suggested similar' in Check. Currently this only includes image and text submissions.

  • If a suggested similar item is accepted, the data will update and it will be listed as Machine confirmed similar.

  • If a suggested similar item is rejected, the data will update and it will be listed as New submission.

The ideal number in this column is therefore 0, meaning that all algorithmic suggestions have been reviewed.

Human confirmed similar

Submissions that have been manually added to existing items as similar, and listed as 'Confirmed similar' in Check.

New submissions

Submissions that are none of the above: not trashed, not exact matches, and not similar (confirmed or suggested).

Total similar submissions

Total number of submissions that are Exact matches, Machine confirmed similar, or Human confirmed similar. This does not include Machine suggested similar.

% Similar submissions

Percentage of similar submissions among all tipline submissions.
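Continuing the made-up example above, Total similar submissions would be 30 + 15 + 10 = 55 (exact matches + machine confirmed + human confirmed, leaving out the 5 machine suggested), and % Similar submissions would be 55 / 100 = 55%.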

Reports published

The number of new reports that have been published.

Reports sent

The total number of reports sent. One report can be sent to several users if multiple users have submitted an exact match or a confirmed similar item. The number of reports sent is equal to the number of people who requested a fact-check.

User per report

The average number of users served per published report. The higher the number, the more efficient the work. User per report = Reports sent / Reports published.
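For example (made-up numbers): if 4 reports were published in a week and they were sent to 60 users in total, User per report = 60 / 4 = 15, i.e. each published report served 15 users on average.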

Median hours to respond

The median number of hours between when a submission is received and when the user who submitted it receives a report.
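For example (made-up numbers): if five reports were sent 1, 2, 3, 8, and 40 hours after the corresponding submissions, the median is 3 hours (the middle value), which is less affected by a few very slow responses than an average would be.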

Instantaneous reports sent

The number of reports that have been sent to users within 10 minutes.

% of Instantaneous reports

The percentage of reports that were sent back to users in less than 10 minutes.
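For example (made-up numbers): if 50 reports were sent in a week and 20 of them reached users within 10 minutes, Instantaneous reports sent would be 20 and % of Instantaneous reports would be 20 / 50 = 40%.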
