Schedule

09:00-09:30   Ozan Sener, Amir Zamir: Opening — Negative Results: what and why? [slides]
09:30-10:15   Jitendra Malik: Computer Vision: a historical perspective [slides]
10:15-10:45   Coffee Break
10:45-11:30   Larry Zitnick: Which way forward? AI + vision [slides]
11:30-11:45   (Spotlight) Jiajun Lu, Hussein Sibai, Evan Fabry, David Forsyth: No Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles [PDF]
11:45-12:00   (Spotlight) Wei-Lun Chao, Hexiang Hu, Fei Sha: Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets [PDF]
12:00-12:30   Pascal Fua: Imposing Hard Constraints on Deep Networks: Promises and Limitations [pdf] [slides]
12:30-13:30   Lunch Break
13:30-14:15   Alexei Efros: An Unbiased Look at How We Never Learn From Our Mistakes [slides]
14:15-15:00   Coffee Break
15:00-15:45   Dhruv Batra: A Tale of Two Negative Results [slides]
15:45-16:30   Panel (Moderator: Rahul Sukthankar). Panelists: Jitendra Malik, Larry Zitnick, Pascal Fua, Alexei Efros, Dhruv Batra, Antonio Torralba

What are we talking about?

Workshop location: 320

Experimental fields typically exhibit a strong bias towards publications with positive results, and computer vision is no exception. However, negative or inconclusive results are fundamental to the advancement of science. A prominent example of a negative result is the Michelson-Morley experiment, which was originally expected to detect a certain velocity of the Earth relative to the postulated luminiferous aether. The experiment failed to do so, and this negative finding contributed crucially to Einstein's development of special relativity.

What is a negative result?

One general form of a positive statement is "idea X works for problem Y". A negative statement has the counter form "idea X does not work, and may never work, for problem Y". Most of our research findings and publications in computer vision have the former structure, though the latter, negative statement is also valuable in steering our mindsets in the right direction (see below for examples from other fields). Although it is hard to provide a concrete definition of a negative result in computer vision, we think it is important to restrict the definition to a small subset that we can effectively discuss and use to reach useful conclusions. Hence, we give some examples of negative results:

Conclusive (Experimental/Theoretical) Negative Results: Researchers generate a large number of novel and interesting ideas. Only a subset of such ideas work as expected. Unfortunately, a lack of incentives for presenting negative results prevents researchers from pushing negative findings to a conclusion. Hence, we are interested in novel and interesting ideas that were conclusively demonstrated not to work.

In computer vision, we make many assumptions while designing our algorithms. Some of these assumptions are grounded in observations, and some exist primarily to make the problem computationally tractable. We generally apply these algorithms to some set of domains to obtain conclusive experimental results. Typically, if the set of images we are interested in is D_cv, our problem definition constitutes a smaller domain D_assume, and our experiments are done on an even smaller domain D_exp. Generally speaking, D_exp ⊂ D_assume ⊂ D_cv. Hence, it is an open problem to observe the applicability of results beyond the experimental setup described in the literature. We are interested in negative results that show the limitations of existing approaches on interesting domains.

Negative results on metrics and datasets: Ronald Coase once said, "If you torture the data long enough, it will confess (to anything)". It is very valuable to rigorously explore the limitations of metrics and datasets. For example, taking a real problem and showing that a metric or dataset can only reach inconclusive results has high value for the field. Since various benchmarks do not separate the effect of an algorithm from that of its engineering, it would also be interesting to understand whether a result was limited by the engineering or by the algorithm.

High-level lessons from the past: research ideas that were once presumed promising but are now considered incorrect. There should be a factor of surprise in such observations, and we invite researchers who can explain the reasoning that brought our community to this understanding.

Do we have meaningful negative results in computer vision?

We believe we do. In recent years, we have seen many similar algorithms independently proposed by different researchers showing that we often have similar ideas. Given the fact that many of these ideas fail, disseminating the lessons from such failures will save the community a lot of time and resources. The key is appropriately incentivizing the sharing of conclusive negative results.



One concern could be that in computer vision, many parameters contribute to a problem, and it is therefore cumbersome to provide conclusive negative results when a particular idea fails: it is computationally intractable to conduct an exhaustive search over the parameter space to identify the particular factors behind a given failure. While this concern is partially valid, we believe that experimental designs specifically targeted at evaluating a negative hypothesis can alleviate this issue. For instance, by 2013 it had been commonly observed that HOG features failed on certain detection problems, but it was the HOGgles paper that convincingly demonstrated why HOG features were ill-suited to those tasks.



We are also open to hearing the other side of this argument, namely that only positive results are valuable, during the workshop's panel discussions. We specifically plan to invite people who believe that only positive results should be incentivized to participate in our panel discussions. If this leads to an agreement that negative results should not be part of computer vision discussions, then at least that will be our first negative result!

Negative Results in Other Fields

Negative results, and the way to disseminate them, remain a rather controversial topic that has not reached a conclusion in many fields. However, certain fields, such as the social and biological sciences, are clearly way ahead in this discussion. They have even developed specialized journals, e.g. the Journal of Negative Results in BioMedicine, the Journal of Negative Results in Ecology and Evolutionary Sciences, The All Results Journals, and the Journal of Articles in Support of the Null Hypothesis. Although we need to discuss the characteristics of computer vision research and come up with an effective mechanism for utilizing negative results, the steps taken by these fields will undoubtedly be an inspiration to us.

There are also many opinion and research articles in distinguished journals discussing the role of negative results in science, such as [Assen et al.] and [Ioannidis].

Papers and Dates

Call for Papers

Experimental fields typically exhibit a strong bias towards publications with positive results, and computer vision is no exception. However, negative or inconclusive results are fundamental to the advancement of science. In this workshop, we invite high-quality negative results. The Workshop on Negative Results in Computer Vision is open to any researcher studying any sub-topic of computer vision who wants to challenge current models, algorithms, and datasets. We invite researchers to submit their negative results in any of the sub-topics of computer vision. Although we discourage half-baked results, we strongly believe that "negative" observations and conclusions, based on rigorous experimentation and thorough documentation, ought to be published in order to be discussed, confirmed, or refuted by others. In addition, publishing well-documented failures may reveal fundamental flaws and obstacles in commonly used methods or datasets, ultimately leading to advancements in computer vision. In other words, while being open to any area and most definitions of negative results, we expect rigor in the experimental study and clarity in presentation.

Submission Guidelines: The paper submission guidelines, template, and maximum length are the same as the main conference's (8 pages, exclusive of references). Please refer to the conference paper submission guidelines for the details and the author kit. CMT submission website

Important Dates:
Submission Deadline: June 2, 2017 (Anywhere on Earth)
Reviews Due: June 16, 2017
Decision Notification: June 27, 2017 (updated from June 23)
Camera-ready: July 10, 2017 (updated from June 30 and July 3)
Workshop Date: July 26, 2017

Organizers