Sunday, June 19, 2016

IBM Watson at NAACL 2016

There were several NLP flare-ups on Twitter recently, triggered by the contrast between academic NLP and industry NLP. I'm not going to re-litigate those arguments, but I will note that one IBM Watson question answering team anticipated this very tension in their paper for the NAACL HLT 2016 Workshop on Human-Computer Question Answering.

The paper is titled Watson Discovery Advisor: Question-answering in an industrial setting.

The Abstract
This work discusses a mix of challenges arising from Watson Discovery Advisor (WDA), an industrial strength descendant of the Watson Jeopardy! Question Answering system currently used in production in industry settings. Typical challenges include generation of appropriate training questions, adaptation to new industry domains, and iterative improvement of the system through manual error analyses.
The paper's topic is not surprising given that four of the authors hold PhDs (Charley, Graham, Allen, and Kristen). Hence, it was largely a group of fish out of water: they have an academic bent, but wrestle daily with the real-world challenges of paying customers and very messy data.

Here are five take-aways:

  1. Real-world questions and answers are far more ambiguous and domain-specific than academic training sets.
  2. Domain tuning involves far more than just retraining ML models.
  3. Useful error analysis requires deep dives into specific QA failures (as opposed to broad statistical generalizations).
  4. Defining what counts as an error is itself embedded in the context of the customer's needs and the domain data. What counts as an error to one customer may be acceptable to another.
  5. Quiz-Bowl evaluations are highly constrained special cases of general QA, a point I made in 2014 here (pats self on back). Their lessons learned are of little value to the industry QA world (for now, at least).

I do hope you will read the brief paper in full (as well as the other excellent papers in the workshop).

Monday, January 25, 2016

Genetic Mutation, Thick Data, and Human Intuition

There are two stories trending heavily on my social network sites that are seemingly unrelated, yet they share one obvious conclusion: the value of human intuition in finding needles in big data haystacks. Reading them highlighted for me the special role humans can still play in the emerging 21st century world of big data.

In the first story, The Patient Who Diagnosed Her Own Genetic Mutation—and an Olympic Athlete's, a woman with muscular dystrophy sees a photo of an Olympic sprinter’s bulging muscles and thinks to herself, “she has the same condition I do.” What in the world would cause her to think that? There is no pattern in the data that would suggest it. The story is accompanied by a startling picture of two women who, at first glance, look nothing alike. But once guided by the needle in the haystack that this woman saw, a similarity is illuminated, and eventually a connection is made between two medically disparate facts that, once combined, opened a new path of inquiry into muscle growth and dystrophy that is now a productive area of research. Mind you, no new chemical compound was discovered. No new technique or method was built that allowed scientists to see something that couldn’t be seen before. Nope. Nothing *new* came into being; rather, a connection was found between two things that none of the world’s experts had seen before. One epiphany by a human being looking for a needle in a haystack. And she found it.

In the second story, Why Big Data Needs Thick Data, an anthropologist working closely with just 100 Motorola user cases to understand their stories discovers a pattern that Motorola’s own big data efforts missed. How? Because his case-study approach emphasized context. Money quote:
For Big Data to be analyzable, it must use normalizing, standardizing, defining, clustering, all processes that strip the data set of context, meaning, and stories. Thick Data can rescue Big Data from the context-loss that comes with the processes of making it usable.
Traditional machine learning techniques are designed to find large patterns in big data, but those same techniques fail to address the needle-in-the-haystack problem. This is where humans and intuition truly stand apart. Both of these articles are well worth reading in the context of discovering the gaps in current data analysis techniques that humans must fill.

UPDATE: Here's a third story making a similar point: a human being using an automatically culled dictionary noticed a misogynist tendency in the examples it provided (the dictionary's example sentence for "rabid" was "a rabid feminist").

And here's a fourth: Algorithms Need Managers, Too. Money quote: "Google’s hard goal of maximizing clicks on ads had led to a situation in which its algorithms, refined through feedback over time, were in effect defaming people with certain kinds of names."

Sunday, January 10, 2016

Advice for linguistics grad students entering industry

At the LSA mixer yesterday I had the chance to chat with a dozen or so grad students in linguistics who were interested in non-academic jobs. Here I'll note some of the recurring themes and the advice I gave.

The First Job
Advice: Be on the lookout, and know what a good opportunity looks like.

Most students were very interested in the jump: how do you make that first transition from academia to industry? In general, you need to be in the market, actively looking and actively promoting yourself as a candidate. For me, it was a random posting on The Linguist List that caught my eye. In the summer of 2004 I was a bored ABD grad student. I knew I wasn't going to be competitive for academic jobs at that point, so I checked The Linguist List job board daily. One day I saw a posting from a small consulting company. They were looking for a linguist to help them create translation complexity metrics. They listed every sub-genre in linguistics as their requirements, which told me they really didn't know what they wanted. I saw that as an opportunity, because I could swoop in and help them understand what they needed. I applied, and after several phone calls I was asked to create a proposal for their customer. I had a conference call to discuss the proposal (I was in shorts and a t-shirt in an empty lab during the call, but they didn't know that). Long story short, I got the job*, moved to DC, and spent about two years working as a consultant on that and other government contracts. That first job was a big step in moving into industry: I had very impressive clients, a skill set that was rare in the market, and a well-defined deliverable that I could point to as a success.


Visibility
Advice: Make recruiters come to you. Maintain a robust LinkedIn profile and be active on the site on a weekly basis (so that recruiters will find you).

Several students wondered whether LinkedIn was considered legitimate. I believe it's fair to say that within the tech and NLP world, LinkedIn is very much legit. My LinkedIn profile has been crucial to my being recruited for multiple jobs, two of which I accepted. Recruiters' algorithms are constantly searching the site for candidates for all kinds of jobs. In fact, most of the really good jobs for linguists are never posted on job sites; they are filled only by recruiters. So you need strategies for waving your flag and getting recruiters to come to you. In the DC area, there are excellent opportunities for linguists at DARPA, CASL, IARPA, and NIST, as well as at MITRE, RAND, and other FFRDCs (federally funded research and development centers), but these openings rarely appear on job boards. You need them to find you. A good LinkedIn page is a great way to increase your visibility.

Another way to increase your visibility is to go public with your projects. You can always blog descriptions and analyses. For computer science students, a GitHub account is virtually a requirement; I think linguists should follow their lead. You most likely write little scripts anyway: maybe an R script to do some analysis, or a Python script to extract some data. Put those up on GitHub with a little README document. That's an easy place for tech companies to see your work. Also, if you have created data sets that you can freely distribute, put those up on GitHub too. I also recommend competing in a Kaggle competition. Kaggle sponsors many machine learning competitions: they provide data, set the requirements, and post results. It's a great way to practice a little NLP and data science, and to increase your visibility (and put your Kaggle competitions on your resume!). Here are two linguistically intriguing Kaggle competitions ready for you right now: Hillary Clinton's Emails (think about the many things you could analyze in those!) and NIPS 2015 Papers (how can a linguist characterize a bunch of machine learning papers?).
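To make that concrete, here is a toy sketch of the kind of little script I mean. The file name and the punctuation-stripping details are illustrative assumptions, not recommendations; the point is that even something this small is worth a repo and a README:

```python
#!/usr/bin/env python
"""Count word frequencies in a plain-text file.

A toy example of the kind of small utility worth posting to GitHub
with a short README. Usage: python word_freq.py mycorpus.txt
"""
import sys
from collections import Counter

def word_frequencies(path):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            for token in line.lower().split():
                word = token.strip(".,;:!?\"'()[]")  # strip surrounding punctuation
                if word:
                    counts[word] += 1
    return counts

if __name__ == "__main__":
    # Print the 20 most frequent words, one per line.
    for word, n in word_frequencies(sys.argv[1]).most_common(20):
        print("{}\t{}".format(n, word))
```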

If you have managed to automate a process that you once did manually (either through an R script or maybe Excel formulas), write that up in a blog post. Automating manual processes is huge in industry. You know the messy nature of language data better than anyone else, so write some blog posts describing the kind of messiness you see and what you do about it. That's gold.


Resume
Advice: List tools and data sets. Do you use Praat? List it. Do you use the Buckeye Corpus? List it. Make it clear that you have experience with tools and data management. Those are two areas where tech companies always have work to be done, so make it clear that you can do that work.



*FYI, here's what the deal was with that first consulting job: The FBI tests lots of people as potential translators. So, for example, they will give a native speaker of Vietnamese several passages of Vietnamese writing (one simple, one of medium complexity, and one complex); the applicant is then asked to translate the passages into English, and the FBI grades each translation. The problem was that the FBI didn't have a standardized metric for what counted as a complex passage in Vietnamese (or in the many, many other languages they hire translators for). They relied on experienced translators to recommend passages from work they had done in the past. It turns out that was a lousy way to find example passages: the actual complexity of the passages was wildly uneven, and there was no consistency across languages.
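To give a flavor of the problem space (and only that; what we actually delivered was considerably more involved and is long gone), a naive first-pass passage-complexity score might combine average sentence length with lexical diversity. A hypothetical sketch:

```python
"""A naive passage-complexity score. NOT the metric we actually built
for that contract; just a hypothetical illustration of a starting point."""
import re

def complexity_score(passage):
    sentences = [s for s in re.split(r"[.!?]+", passage) if s.strip()]
    words = passage.lower().split()
    if not sentences or not words:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)   # longer sentences -> harder
    type_token_ratio = len(set(words)) / len(words)  # more varied vocabulary -> harder
    return avg_sentence_len * type_token_ratio

print(complexity_score("The cat sat. The cat ran."))   # low score
print(complexity_score("Notwithstanding prior jurisprudence, the appellate "
                       "court vacated the contested ruling in its entirety."))  # higher
```

Even a crude score like this would have been more consistent across languages than asking translators to eyeball passages from memory.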

Thursday, January 7, 2016

LSA 2016 Evening Recommendations

With the LSA's annual convention officially underway, I've thrown together a list of a few restaurants and bars within a short walking distance of the convention center that grad students and attendees might want to enjoy. My walking estimates assume you are standing in front of the convention center.

Busboys and Poets (4 blocks west at 5th & K) - A DC Institution. You will not be forgiven if you do not make at least one pilgrimage here.

Maddy’s Taproom (4 blocks east at 13th & L) - Good beer selection.

RFD Washington (4 blocks south at 7th & H) - Large bottled beer selection, good draft beer selection (food ain't that great).

Churchkey (6 blocks northeast at 14th & Rhode Island) - Officially, one of the best beer rooms in the US.

Stan's Restaurant (7 blocks east at L & Vermont) - Downstairs, casual. Very strong drinks. Supposedly good wings (I'm a vegetarian, so I hold no opinion).

Daikaya - Ramen - Izakaya (7 blocks southwest at 6th & G) - The upstairs bar can be easier to get into sometimes. It's a popular place.

Teaism, Penn Quarter (8 blocks south at 8th & G) - Great snack place mid-way to the National Mall. Large downstairs dining area. A great place to have some tea and a snack, and catch up on conference planning.

There are, of course, lots of other places within a short walk. I recommend 14th street in general. 9th street has some good stuff, especially as you get closer to U, but it's a little sketchy of a walk.



Sunday, November 29, 2015

online psycholinguistics demos 2015

I was asked recently about an old post from 2008 that listed a variety of online psycholinguistics demos. All of the links are dead now, so I was asked if I knew of any updated ones. This is what I could find. Any suggestions would be welcome.

  • Harvard Implicit Associations Task: Project Implicit is a non-profit organization and international collaboration between researchers who are interested in implicit social cognition - thoughts and feelings outside of conscious awareness and control. The goal of the organization is to educate the public about hidden biases and to provide a “virtual laboratory” for collecting data on the Internet.
  • webspr - Conduct psycholinguistic experiments (e.g. self-paced reading and speeded acceptability judgment tasks) remotely using a web interface
  • Games With Words: Learn about language and about yourself while advancing cutting-edge science. How good is your language sense?
  • Lexical Decision Task demo: In a lexical decision task (LDT), a participant needs to make a decision about whether combinations of letters are words or not. For example, when you see the word "GIRL", you respond "yes, this is a real English word", but when you see the letters "XLFFE" you respond "No, this is not a real English word". (A bare-bones DIY version is sketched at the end of this list.)
  • Categorical Perception: Categorical perception means that a change in some variable along a continuum is perceived not as gradual but as instances of discrete categories. The test presented here is a classical demonstration of categorical perception for a certain type of speech-like stimuli.
Paul Warren also has a variety of demos at the site for his textbook "Introducing Psycholinguistics":


  • McGurk demo

  • Various other demos from Warren's textbook
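And since the core of a lexical decision task is so simple, here is the bare-bones DIY version promised above: a toy terminal sketch in Python. The stimulus list is made up, and a real experiment needs millisecond-accurate timing, counterbalanced stimuli, and many more trials:

```python
"""Bare-bones lexical decision task in the terminal. A toy sketch only."""
import random
import time

# Hypothetical stimuli: (letter string, is it a real English word?)
STIMULI = [("GIRL", True), ("XLFFE", False), ("TABLE", True),
           ("BLICK", False), ("HOUSE", True), ("FLOOP", False)]

random.shuffle(STIMULI)
for stimulus, is_word in STIMULI:
    start = time.time()
    answer = input("Is '{}' a real English word? (y/n): ".format(stimulus))
    rt = time.time() - start  # crude reaction time in seconds
    correct = (answer.strip().lower() == "y") == is_word
    print("  {} ({:.2f}s)".format("correct" if correct else "wrong", rt))
```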


Saturday, November 14, 2015

    Google's TensorFlow and "mathematical tricks"

TensorFlow is a new open source software library for machine learning distributed by Google. In some ways, it could be seen as a competitor to BlueMix (though much less user friendly). Erik Mueller, who worked on the original Watson Jeopardy! system (and has a vested interest in AI with his new company, Symbolic AI), just wrote a brief review of TensorFlow for Wired.

    Google’s TensorFlow Alone Will Not Revolutionize AI

Unfortunately, it's not really a review of TensorFlow itself; rather, it makes a general point against purely statistical approaches. I basically agree with that point, but the argument requires a much more comprehensive treatment.

    Some good quotes from the article:

    • "I think [TensorFlow] will focus our attention on experimenting with mathematical tricks, rather than on understanding human thought processes."
    • "I’d rather see us design AI systems that are understandable and communicative."

    Wednesday, June 10, 2015

    The Language Myth - Book Review

    Linguistics professor Vyvyan Evans recently published a new book that has at least one group of linguists in a state of frenzy: The Language Myth: Why language is not an instinct. The book's blurb sums up its content:
    Some scientists have argued that language is innate, a type of unique human 'instinct' pre-programmed in us from birth. In this book, Vyvyan Evans argues that this received wisdom is, in fact, a myth. Debunking the notion of a language 'instinct', Evans demonstrates that language is related to other animal forms of communication; that languages exhibit staggering diversity; that we learn our mother tongue drawing on general properties and abilities of the human mind, rather than an inborn 'universal' grammar; that language is not autonomous but is closely related to other aspects of our mental lives; and that, ultimately, language and the mind reflect and draw upon the way we interact with others in the world
Evans grounds his motivation in the idea that there are a variety of false claims about how language works ("myths") deeply rooted in our culture's background knowledge, as well as explicated in introductory textbooks. He goes further, claiming that these false claims have been pushed by a small number of pre-eminent scholars whose fame and influence have caused the claims to be taken more seriously than they deserve on their face.

    By all rights, I should be a good audience for this book. I was trained as a linguist in a department that was openly hostile to the language instinct doctrine that this book argues against (see my post about that experience). 

The book is organized around two principles. First, each chapter starts by stating one false claim and describing why it was proposed as an explanation of how language works. Second, each chapter then deconstructs the myth into component claims and shoots holes in each one.

    The Good
Evans does a service to the lay audience by pointing out that deep divisions exist within the field of linguistics. Too often non-experts assume a technical field is homogeneous and that everyone agrees on the basic theories. This is simply not true of linguistics.

Evans also does a service to his audience by stepping through the logic of refutation. His point-counterpoint style can be detailed at times, but I appreciate a book that doesn't treat its readers like third graders (I'm looking at you, Gladwell).

For me, the standout chapter was Chapter 5, "Is language a distinct module in the mind?". This chapter is devoted to neurolinguistics, and here Evans is at his sharpest, leading the reader through his point-counterpoint about brain regions and functionality.

    The Bad
Evans fails to do justice to the myths he debunks. He has been accused of creating straw men (and addresses this somewhat in the introduction), and ultimately I have to agree. Evans does not provide a fair description of arguments like the poverty of the stimulus.

Evans quickly shows his bias and directly attacks just two people: Noam Chomsky and Steven Pinker (and to a lesser extent, Jerry Fodor). Evans wants to debunk general notions that have crept into the general public's background beliefs about language, but what he really does is rail against two guys. And worse, he often devolves into a detailed point-counterpoint with just one book, Pinker's 1994 The Language Instinct. Any reader unfamiliar with that book will quickly drown in arguments against claims they never encountered. As an exercise, I would recommend Evans re-write this book without a single reference to Chomsky, Pinker, or Fodor. I suspect the result would be a more effective piece of writing.

Lest some Chomskyan take this review wrongly, let me be clear: I think Chomsky is broadly wrong and Evans is broadly right. But even though I believe Pinker is wrong and Evans is right, I find Pinker a far superior writer and seller of ideas. And that is a serious problem.

Evans would have been better off throwing away the anti-Chomsky rants and simply writing his own view of how language works. A book on its own terms. Instead he comes across as your drunk uncle at Christmas who can't stop complaining about how the ref in a high school football game 20 years ago screwed him over with a bad call. That might actually be true, but get over it.

I feel Evans has taken on too much. Each myth is worth a small book of its own to debunk properly. This is partly what leads to the straw-man arguments: efficiency. A non-straw-man version of Evans' book would be 3000 pages long and would only appeal to the three people in the world who know enough about both Chomskyan and functionalist theory to follow all that detail. So I *get* why Evans chose this style. I just think Pinker is better at it. Ultimately, Evans alienates his lay audience by ranting about people they don't know and arguments they are unfamiliar with.

A complaint about details: Evans can be disingenuous with citations. On page 110 he uses the wording "the most recent version of Universal Grammar", but turn to the footnote on page 264 and he cites publications from 1981 and 1993. In a book published in 2015, citations from 1981 and 1993 hardly count as recent. See also page 116, where he cites "a more recent study" that was actually published in 2004 (and probably conducted in 2002).

    I don't want to be critical of a book that argues a position I align with, but I must be honest. This book just doesn't cut it. 


    Sunday, May 17, 2015

    The Language Myth - Preliminary Thoughts

I started reading The Language Myth: Why Language Is Not an Instinct by Vyvyan Evans. The book argues that Noam Chomsky is wrong about the basic nature of language. It has sparked controversy, and more words have probably been published in blogs and tweets in response than are contained in the book itself.

I'm two chapters in, but before I begin posting my review, I wanted to do a post on academic sub-culture, specifically the one I was trained in. I did my (not quite completed) PhD in linguistics at SUNY Buffalo from 1998 to 2004. The students only half-jokingly called it Berkeley East because, at the time, about half the faculty had been trained at Berkeley (and several others were closely affiliated in research), and Berkeley is one of the great strongholds of anti-Chomsky sentiment. Buffalo was clearly a "functionalist" school (though no one ever really knew what that meant, functionalism never really being a field so much as a culture).

In any case, we were clearly, undeniably, virulently anti-Chomsky. And that's the culture I want to describe, to provide some sense of how different the associations with the name "Chomsky" are for me (and, I suspect, Evans) than for non-linguists and for non-Chomskyan linguists.

    So what was it like to be a grad student in a functionalist linguistics department, with respect to Noam Chomsky?

    [SPOILER ALERT - inflammatory language below. Most of this post is intended to represent a thought climate within functionalist linguistics, not factual evidence]

    I never quite drank the functionalist Kool-Aid (nor the Chomskyean Kool-Aid either, to be clear); nonetheless I remain endowed with a healthy dose of Chomsky skepticism.

Here is how I remember the general critique of Chomsky echoed in the halls of SUNY Buffalo linguistics (this is my memory of ten-plus years ago, not intended as a technical critique; it is meant to give an impression of what the culture of a functionalist department felt like).

    The Presence of Chomsky

• First, we didn't talk about Chomsky much; he was peripheral. What little we said about him was typically mocking and belittling (grad students, ya know).
    • The syntax courses, however, were designed to teach Chomsky's theories for half a semester, then each instructor was given the second half to teach whatever alternative theory they wanted. For my Syntax I course, we used one of Andrew Radford's Minimalism textbooks (then RRG for the second half). For my Syntax II, we used Elizabeth Cowper's GB textbook (then what Matthew Dryer called "Basic Theory", which I always preferred above all else).
    • We had a summer reading group for years. One summer we read Chomsky’s The Minimalist Program because we felt responsible for understanding the paradigm (we wanted to try to understand the *other*). The group included two senior faculty, both with serious syntax background. 

    The Perception of Chomsky 
(amongst my cohort; this is what my professors, my fellow grad students, and I thought about the guy. Whether we were accurate or not is another matter)

    • Noam Chomsky is a likable man, for those who get to meet him in person.
    • Chomsky did linguistics a great service by taking linguistics in the general direction of hard science.
    However,
• Chomsky's ideas have never been accepted by a majority of linguists, if you include semanticists, discourse analysts, sociolinguists, international linguists, psycholinguists, anthropological linguists, historical linguists, field linguists, philologists, etc. Outside of American syntacticians, Chomsky is a footnote, a non-factor.
    • Many of his fiercest critics were former students or colleagues.
    • Chomsky radically changes his theories every ten years or so, simply ignoring his previous claims when they're proved wrong.
    • Chomsky has never made a serious attempt to understand other theories or engage in linguistic debate; he lives in a cocoon.
    • He bases major theoretical mechanisms on scant evidence, often obsessing over a single sentence in a language he himself has never studied, based only on evidence from an obscure source (like a grad student thesis).
    • He condescendingly dismisses most linguistic evidence (like spoken data) with the unfounded distinction between narrow syntax and broad syntax. This allows him to cherry pick data that suits him, and ignore data that refutes his claims.
    • When critiques are presented by serious linguists with evidence, the evidence is discarded as *irrelevant*, the linguists are derided as foolish amateurs, and the critiques are dismissed as naive. But rarely are the points taken as serious debate.
    • Chomsky only debates internal mechanisms of his own theories; anyone who argues using mechanisms outside of those Chomsky-internals is derided as ignorant. In other words, there is only one theoretical mechanism, only one set of theoretical terms and artifacts; only these will be recognized as *legitimate* linguistics. Anything else is ignored. 
    • Chomsky doesn't engage with the wider linguistics community. 
    • Chomsky expects to be taken seriously in a way that he himself would never allow anyone else to be taken seriously: lacking substantial evidence, lacking external coherence, and lacking anything approximating collegiality.
• Oh, and Chomsky himself hasn't done serious linguistic analysis since the 80s. He has devoted most of the last 30 years to tilting at political windmills. At most, he spends maybe 10% of his time on linguistics.

That’s the image of the man as I recall it, from the vantage of a functionalist department devoted to descriptive linguistics. Let the verbal assaults begin!!!

UPDATE (May 5): This post prompted a spirited Reddit discussion, well worth reading.

    Thursday, March 12, 2015

    Jobs with IBM Watson

IBM Watson is currently recruiting Washington, DC-area engineers for "Natural Language Processing Analyst" positions. We're looking for engineers who like to build stuff and travel. You can apply through the link, or feel free to contact me if you want more info (use the "View my complete profile" link to the right for my contact info).

Here's the official posting (hint: there is wiggle room):

    Job description
    Ready to change the way the world works? IBM Watson uses Cognitive Computing to tackle some of humanity's most challenging problems - like revolutionizing how doctors research cancer or transforming how businesses engage with their customers. We have an exciting opportunity for a Watson Natural Language Processing Analyst responsible for rigorous analysis of system performance phases including search, evidence scoring, and machine learning.

    Natural Language Processing (NLP) Analysts evaluate system performance, and identify steps to drive enhancements. The role is part analyst and part developer. Analysts are required to function independently to dive deep into system components, identify areas for improvement, and devise solutions. Analysts are expected to drive test and evaluation of their solutions, and empirically identify follow on steps to implement continuous system improvement. Natural Language Processing is an explosively dynamic field; analysts must expect ambiguity, and demonstrate the ability to develop courses of action on the basis of data driven analysis. Must be able to work independently and demonstrate initiative. Demonstrated analytical skills, security clearances preferred but not required.

We live in a moment of remarkable change and opportunity. The convergence of data and technology is transforming industries, society and even the workplace. New roles are being created that never existed before to meet the demands of this transformation. And IBM Watson is now looking for talent in healthcare, life sciences, financial services, the public sector, and other fields to fill new roles destined to usher in the next era of cognitive computing. Embark on the journey with us at IBM Watson.
    Required
    • Bachelor's Degree
    • At least 2 years experience in Text Search Engines (such as Lucene)
    • At least 2 years experience in Java Development Proficiency
    • Basic knowledge in Natural Language Processing
    • Basic knowledge in Text Analytics/ Info Retrieval
    • Basic knowledge in Unstructured Data
• Readiness to travel 50% annually
    • U.S. citizenship required
    • English: Fluent
    Preferred
    • Master's Degree
    • At least 5 years experience in Text Search Engines (such as Lucene)
    • At least 5 years experience in Java Development Proficiency
    • At least 2 years experience in Natural Language Processing
    • At least 2 years experience in Text Analytics/ Info Retrieval
    • At least 2 years experience in Unstructured Data
    IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

    Monday, March 2, 2015

    The Linguistics behind IBM Watson

    I will be talking about the linguistics behind IBM Watson's Question Answering on March 11 at the DC Natural Language Processing MeetUp. Here's the blurb:

    In February 2011, IBM Watson defeated Brad Rutter and Ken Jennings in the Jeopardy! Challenge. Today, Watson is a cognitive system that enables a new partnership between people and computers that enhances and scales human expertise by providing a more natural relationship between the human and the computer. 

    One part of Watson’s cognitive computing platform is Question Answering. The main objective of QA is to analyze natural language questions and present concise answers with supporting evidence, rather than a list of possibly relevant documents like internet search engines.

    This talk will describe some of the natural language processing components that go into just three of the basic stages of IBM Watson’s Question Answering pipeline:

    • Question Analysis
    • Hypothesis Generation
    • Semantic Types

    The NLP components that help make this happen include a full syntactic parse, entity and relationship extraction, semantic tagging, co-reference, automatic frame discovery, and many others. This talk will discuss how sophisticated linguistic resources allow Watson to achieve true question answering functionality.
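To give a flavor of what question analysis involves, one of its core jobs is identifying the lexical answer type (LAT): what kind of thing the question is asking for. Below is a deliberately tiny, rule-based sketch of that idea. It is emphatically not Watson's implementation (which, as described above, uses full syntactic parses and statistical models); the patterns and example questions are mine:

```python
"""Toy lexical-answer-type (LAT) detector for question analysis.
NOT Watson's implementation; just an illustration of the idea."""
import re

# A few hand-written patterns mapping question shapes to answer types.
LAT_PATTERNS = [
    (r"^who\b", "PERSON"),
    (r"^where\b", "LOCATION"),
    (r"^when\b", "DATE"),
    (r"^(what|which)\s+(\w+)", None),  # LAT is the noun right after what/which
]

def lexical_answer_type(question):
    q = question.strip().lower()
    for pattern, lat in LAT_PATTERNS:
        m = re.search(pattern, q)
        if m:
            return lat if lat else m.group(2).upper()
    return "UNKNOWN"

print(lexical_answer_type("Who wrote Syntactic Structures?"))    # PERSON
print(lexical_answer_type("What river flows through Buffalo?"))  # RIVER
print(lexical_answer_type("When was Watson's Jeopardy! match?")) # DATE
```

Real question analysis has to cope with Jeopardy!-style inverted clues, vague LATs like "it" or "this", and far messier syntax, which is exactly where the deeper linguistic resources listed above earn their keep.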