PyData Berlin 2018
In this talk, I will discuss how to address some of the most likely causes of failure for new Natural Language Processing (NLP) projects. My main recommendation is to take an iterative approach: don't assume you know what your pipeline should look like, let alone your annotation schemes or model architectures.
---
www.pydata.org
PyData is an educational program of NumFOCUS, a 501(c)(3) non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.
PyData conferences aim to be accessible and community-driven, with novice to advanced level presentations. PyData tutorials and talks bring attendees the latest project features along with cutting-edge use cases.
0:00 - Introduction
2:21 - NLP projects are like start-ups
6:12 - Machine Learning Hierarchy of Needs
11:45 - Making modeling decisions that are simple, obvious, and wrong (Problem #1)
17:05 - Compose generic models into novel solutions (Solution #1)
19:05 - Workflow #1
22:35 - Big annotation projects make evidence collection expensive (Problem #2)
24:00 - Run your own micro-experiments (Solution #2)
27:38 - It is hard to get good data by boring underpaid people (Problem #3)
28:43 - Smaller teams, better workflows (Solution #3)
Shout-out to https://github.com/stobinaator for the video timestamps!
Want to help add timestamps to our YouTube videos to help with discoverability? Find out more here: https://github.com/numfocus/YouTubeVideoTimestamps