KNC13 OPENGOV | OPEN BS DETECTOR: Baffle-speak sensing and contextualization | Knight News Challenge

Short URL: http://j.mp/skknc13obsd

Twitter: @OpenBSDetector

Public officials too often rely on rote phrases devoid of real meaning (baffle-speak) when answering questions of public interest. Open BS Detector spots these sound bites, logs and tracks them, issues alerts, and offers useful information and context.

THE TASK
Citizens and journalists need to discern fact from fiction or misdirection in statements by politicians and public officials more quickly and easily, put those statements in context, and get accurate facts so they can make informed decisions and take action.

SOLUTION
Open Baffle-Speak Detector will parse these comments and answers to questions, compare them to the official’s historical record on a topic, and return statistical information as well as verified facts to the person seeking a better understanding of a subject and of the official’s approach to it.
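
As a first pass at the comparison step, the sketch below scores how much of a new statement is recycled verbatim from the official’s earlier statements on the same topic. It is only an illustration of one plausible technique, written in TypeScript with hypothetical names; the project has not settled on an algorithm.

```typescript
// Minimal sketch (hypothetical): flag likely baffle-speak by measuring how much of a
// new statement reuses three-word phrases from a speaker's previous statements.
function ngrams(text: string, n = 3): Set<string> {
  const words = text
    .toLowerCase()
    .replace(/[^\w\s]/g, "")
    .split(/\s+/)
    .filter(Boolean);
  const grams = new Set<string>();
  for (let i = 0; i + n <= words.length; i++) {
    grams.add(words.slice(i, i + n).join(" "));
  }
  return grams;
}

// Fraction of the new statement's phrases already present in the speaker's record.
function bafflespeakScore(statement: string, statementHistory: string[]): number {
  const current = ngrams(statement);
  if (current.size === 0) return 0;
  const past = new Set<string>();
  for (const prior of statementHistory) {
    for (const gram of ngrams(prior)) past.add(gram);
  }
  let repeated = 0;
  for (const gram of current) {
    if (past.has(gram)) repeated++;
  }
  return repeated / current.size; // 1.0 = entirely recycled phrasing
}
```

A high score would not prove evasion on its own, but it flags statements worth logging, tracking and placing in context.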

IMPLEMENTATION
Open Baffle-Speak Detector will have three modes that let people assess and contextualize the accuracy, veracity and utility of officials’ statements:

  1. A browser plug-in for use on news articles, online transcripts and other text or Web pages (see the sketch after this list).
  2. Voice-recognition/transcription and a Shazam-like mode for real-time assessment.
  3. A photo/facial-recognition and/or augmented-reality function that lets the user quickly identify the individual and see their track record and overall baffle-speak score.
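
To make the browser plug-in mode concrete, the minimal sketch below reads the text a reader has highlighted on a page and asks a scoring service for the official’s baffle-speak score and supporting context. The endpoint URL and the response shape are placeholders for illustration, not a defined API.

```typescript
// Hypothetical response from the scoring service.
interface ContextReport {
  score: number;        // 0..1: share of phrasing recycled from the official's record
  priorUses: number;    // how often the official has used these phrases before
  factChecks: string[]; // links to verified context
}

// Score whatever text the reader has highlighted on the current page.
async function checkSelection(): Promise<ContextReport | null> {
  const selection = window.getSelection()?.toString().trim();
  if (!selection) return null;

  // Hypothetical scoring endpoint; the real service and payload are yet to be designed.
  const response = await fetch("https://api.example.org/obsd/score", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ statement: selection, pageUrl: location.href }),
  });
  if (!response.ok) throw new Error(`Scoring service returned ${response.status}`);
  return (await response.json()) as ContextReport;
}
```

The voice-recognition and photo/AR modes could feed the same kind of scoring service from transcribed audio or an identified official rather than from highlighted text.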

In preliminary conversations, people in the open government movement, journalism and technology have responded to this idea with enthusiasm. The technology to do this exists or is in development in other arenas; it simply needs to be brought together into an easy, end-to-end experience rather than a patchwork that results in a laborious series of tasks or a gap in capability in this particular sphere. We plan to collaborate with as many developers of existing projects, tools and technologies as possible.

SIMILAR PROJECTS AND COLLABORATION OPPORTUNITIES
The Truth Goggles, Lazy Truth, Super PAC App, and Churnalism projects are a few examples of similar ideas with different applications and approaches. They focus on parsing text or standardized data. None handle live, real-time input. Open Baffle-Speak Detector will stitch these approaches together for a robust, on-demand tool that works in live situations.

CURRENT STATUS
We have not yet begun development of Open Baffle-Speak Detector, but we have made initial contact with people working on related projects. We will collaborate with and build on Truth Goggles, Hyperaudio and other projects as much as possible to avoid duplicating work.

TEAM
Saleem Khan: Project leader, journalist [editor and reporter, ex-CBC, Metro International, Toronto Star newspapers; chairman/director, Canadian Association of Journalists]; advisor, University of Toronto ThingTank Lab [Faculty of Information]; founder, invstg8.net.

Collaborators & Advisors:
M. Boas: OpenNews Fellow 2012. Hyperaudio leader, jPlayer HTML5 media library project coordinator, open Web developer.
L. Gridinoc: OpenNews Fellow 2012. Creative technologist specializing in computational linguistics, semantic Web, and visual analytics.
P. Hunter: Over 20 years designing productive interactions between people and technology; expertise in speech recognition, software tools, and education; Fellow in the Leading by Design program at California College of the Arts; veteran of three start-up businesses; currently at Microsoft.
K. Kaushansky: Two decades specializing in speech recognition, voice user interface design, interactive audio experiences, speaker verification, and voice biometrics at startups and global technology firms, including Nortel and Microsoft. Currently at Jawbone.
K. Khan: User experience strategist and designer consulting to governments and Global 1000 corporations, OCAD University sLab advisor; leader of UXI, Canada’s largest UX professionals group.
H. Leson: Director of community engagement, Ushahidi; open source community developer, library and information technician.
M. Saniga, CA: Co-founder, near-realtime business intelligence/data insight generation software firm Quant Inc.; former finance director and manager at Cara, Dell.

FUNDING AND TIMELINE
We anticipate that Open Baffle-Speak Detector will gain interest and uptake among civic development and open government foundations, news organizations, and real-time intelligence companies and investors, who would continue to fund development as well as custom or application-specific versions.

ONE SENTENCE SUMMARY
Open Baffle-Speak Detector is a tool that empowers citizens to identify when public officials give rote rather than real answers, and that adds verified factual context.

LOCATION
Toronto, Ontario, Canada

First Sources Video: A secured transparency platform for video


The Knight-Mozilla News Technology Partnership is looking for ideas on how to reinvent journalism, especially on the Web. To that end, they’re running a series of challenges. The first focuses on “unlocking video”:


Video is a central part of many people’s daily news experience. But most online video is still stuck in a boring embedded box, like “TV on a web page,” separated from the rest of the page content. This offers little in the way of context or opportunities for viewers to engage more deeply.

New open video tools make it possible to pull data from across the web right into the story. Information related to the video can literally “pop” into the page. And videos themselves can change, dynamically adapting as stories evolve. The challenge is to use these tools in ways that serve the story. How can we enrich news video through things like added context, deeper viewer engagement, and the real-time web? What are the untapped possibilities inherent in many-to-many web video?

Here is my entry:

FIRST SOURCES VIDEO: A secured, open platform for crowdsourced, trusted, pseudonymized and anonymized video.

Summary

First Sources is a secured, transparent video dissemination system that can be deployed in any locale, at any degree of granularity, to free not only journalists, citizens, whistleblowers and other people of conscience, but also governments and other institutions, to act in the public interest.

First Sources will enable people and organizations to securely submit video and information, anonymously or pseudonymously, in real time or asynchronously, from any device to an openly accessible platform so that journalists and citizens can collaborate to surface public-interest information on demand, or information that might otherwise remain submerged.

The core of this system would be the ability to transmit anonymized or pseudonymized video securely while enabling participants to issue alerts for supply and demand of geolocated news.
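
One way to picture these alerts is as small structured messages that either offer footage from a location or request footage from a location, which the platform then matches. The shape below is an assumption for illustration only, not a defined schema.

```typescript
// Hypothetical shape of a geolocated supply/demand news alert.
type AlertKind = "supply" | "demand";

interface GeoNewsAlert {
  kind: AlertKind;
  pseudonym: string;     // stable pseudonymous handle, never a real identity
  latitude: number;
  longitude: number;
  radiusMeters: number;  // how precisely the contributor is willing to be located
  topic: string;
  postedAt: string;      // ISO 8601 timestamp
  expiresAt?: string;    // optional: demand requests can time out
}

// Example: a journalist requesting footage near Toronto city hall.
const request: GeoNewsAlert = {
  kind: "demand",
  pseudonym: "heron-42",
  latitude: 43.65,
  longitude: -79.38,
  radiusMeters: 2000,
  topic: "city council budget vote",
  postedAt: new Date().toISOString(),
};
```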

Later phases of the implementation would add real-time machine or crowdsourced translation and subtitles, and make it possible for journalists and the public to collaborate around video objects. This collaboration would include, but not be limited to, real-time discussion, remixing, and creating contextual narrative using other openly available online resources such as status updates, knowledge resources such as Wikipedia, online news and more.
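
As a small sketch of the subtitle piece, crowd-contributed or machine-translated lines could be attached to the video as a standard browser text track. The cue list and function name below are placeholders.

```typescript
// Attach crowdsourced or translated subtitle cues to an HTML5 video as a text track.
interface ContributedCue {
  start: number; // seconds into the video
  end: number;
  text: string;  // translated or transcribed line
}

function attachCrowdSubtitles(video: HTMLVideoElement, cues: ContributedCue[]): void {
  const track = video.addTextTrack("subtitles", "Crowdsourced English", "en");
  track.mode = "showing";
  for (const cue of cues) {
    track.addCue(new VTTCue(cue.start, cue.end, cue.text));
  }
}
```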

First Sources’ initial phase or iteration would be primarily enabled by Tor or a similar technology, HTML5 video, Popcorn and Butter.
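
Popcorn and Butter would handle authoring and syncing the contextual material; the hedged sketch below shows only the underlying idea using nothing but the standard HTML5 video API, with placeholder element IDs and a hard-coded annotation list.

```typescript
// Contextual notes "pop" into the page at the moments of the video they annotate.
interface Annotation {
  start: number; // seconds into the video
  end: number;
  html: string;  // context pulled from the web: a status update, a Wikipedia excerpt, etc.
}

const annotations: Annotation[] = [
  { start: 12, end: 30, html: "<p>Background on the protest location (pulled from Wikipedia).</p>" },
];

const video = document.querySelector<HTMLVideoElement>("#source-video");
const contextPane = document.querySelector<HTMLElement>("#context-pane");

if (video && contextPane) {
  video.addEventListener("timeupdate", () => {
    const t = video.currentTime;
    const active = annotations.filter(a => t >= a.start && t <= a.end);
    contextPane.innerHTML = active.map(a => a.html).join("");
  });
}
```

In practice the annotation list would come from the collaboration described above and could be updated live as a story evolves.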


Description

The partnerships between established, credible news organizations and the whistleblowing document publisher WikiLeaks have dramatically reminded us of the power of documentary evidence: it enables journalists to tell stories that alert and inform citizens in a democratic society about how their public institutions operate, and about the news they don’t see. The bulk of this material is text, with notable exceptions such as the Collateral Murder video.

Waiting days, weeks, months or years for troves of text to be released poses a problem not only for the dissemination of news and information needed in the present, but also for consumption, comprehension and action: humans are visual creatures.

One need only look at the movements for change boiling up across the Arab world to see the power of information, networked communications and bearing witness in person or from afar. Video is a key part of this equation.

The risk to those who would supply this video, real-time or short-term reportage, and information is great.

First Sources is a secured, transparent video dissemination system that can be deployed in any locale, at any degree of granularity, to free not only journalists, citizens, whistleblowers and other people of conscience, but also governments and other institutions, to act in the public interest.

First Sources will enable people and organizations to securely submit video and information, anonymously or pseudonymously, in real time or asynchronously, from any device to an openly accessible platform so that journalists and citizens can collaborate to surface public-interest information on demand, or information that might otherwise remain submerged.

Similarly, enlightened governments and other institutions could use such a platform to proactively release video and information, creating and sustaining an atmosphere of public transparency. Citizens could then retrieve the released video anonymously or pseudonymously, without fear of being monitored or of the potential consequences.

Building the secured identity anonymizing/pseudonymizing function into the system and automating it helps ensure that journalists, witnesses, whistleblowers and other users of that video receive the maximum possible identity protection, and minimizes the potential for reprisals.
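
One hedged illustration of the automated pseudonymizing step: derive a stable handle from a secret the source holds plus a platform-side salt, so a contributor can build a track record without ever revealing who they are. The function name and scheme are assumptions, and transport anonymity (for example via Tor) would be handled separately; a production design would need much stronger guarantees.

```typescript
// Derive a stable, non-identifying handle from a source-held secret and a platform salt.
async function derivePseudonym(sourceSecret: string, platformSalt: string): Promise<string> {
  const data = new TextEncoder().encode(`${platformSalt}:${sourceSecret}`);
  const digest = await crypto.subtle.digest("SHA-256", data);
  const bytes = Array.from(new Uint8Array(digest));
  // Short, human-readable handle derived from the digest.
  return "src-" + bytes.slice(0, 6).map(b => b.toString(16).padStart(2, "0")).join("");
}
```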

Once deployed, the system would be openly accessible to members of the public, or a journalist could give a source a dynamically generated invitation key. This would also provide a secure channel for sources and journalists to communicate with each other.
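
The invitation key itself could be as simple as a single-use, high-entropy token minted by the journalist’s client and handed to the source out of band; presenting it would open the secure submission channel. The sketch below is an assumption about that mechanism, not a specification.

```typescript
// Mint a high-entropy, single-use invitation key (hex-encoded random bytes).
function generateInvitationKey(bytes = 32): string {
  const buf = new Uint8Array(bytes);
  crypto.getRandomValues(buf); // cryptographically strong randomness
  return Array.from(buf, b => b.toString(16).padStart(2, "0")).join("");
}

// e.g. store only a hash of the key server-side, mark it single-use, and expire it quickly.
const inviteKey = generateInvitationKey();
```
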
Alerts for supply and demand of geolocated news would make it possible for journalists and the public to collaborate around video objects in real-time discussion, remix, and contextual federated narrative.

A virtual currency or scrip exchangeable across publishers using the platform could reward the public for contributing.

First Sources would bring global- and national-scale video-based transparency down to the state, provincial, city or even town or community level. The same kind of transparency enabled by international and national news organizations reporting on openly available original-source video would be available to anyone at any level.