Difference between revisions of "Sketchy sketches"

From DiVersions
 
Some "best of" links:

* Teddy bear ... http://vandal.ist/diversions2019/mim/sketchrecog.html#4306-02
* bird ... ice-cream-cone http://vandal.ist/diversions2019/mim/sketchrecog.html#4290
* A panda encircled by a guitar ... http://vandal.ist/diversions2019/mim/sketchrecog.html#1981.039-01
* screwdriver, toothbrush, baseball bat ... http://vandal.ist/diversions2019/mim/sketchrecog.html#0490-01
* binoculars http://vandal.ist/diversions2019/mim/sketchrecog.html#3206
* piano - laptop http://vandal.ist/diversions2019/mim/sketchrecog.html#2012.072
* Cell phone http://vandal.ist/diversions2019/mim/sketchrecog.html#3873-02
* Moon, sun, TV http://vandal.ist/diversions2019/mim/sketchrecog.html#3873-03
* rifle ... toothbrush http://vandal.ist/diversions2019/mim/sketchrecog.html#2001.051
* rifle ... hourglass http://vandal.ist/diversions2019/mim/sketchrecog.html#2012.036.002
 

Latest revision as of 06:02, 10 September 2019

Some "best of" links:

=== Rough notes (not for publication ;) ===

cf Saskia's story of misnaming an instrument. (The end of which was that the African museum that was contacted did not want the instrument returned, but asked that the name be updated, because the name incorrectly referred to a larger class of instruments rather than to the particular instrument in question.)

How explicit do we need to be about our intentionality? Danger: does that flatten the potential? Maybe keep it simple / straightforward.

Metadata as interstitial frames introducing the sequences of images + sketch predictions.

Algorithms reading algorithms...

Humans have used sketching to depict our visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73% of the time. We compare human performance against computational recognition methods. We develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method is able to identify unknown sketches with 56% accuracy (chance is 0.4%). Based on the computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community.[1]
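The recognition pipeline the abstract describes (local features per sketch → a bag-of-features histogram → a multi-class SVM) can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the Gaussian "descriptors", the two categories, and all parameter values are invented stand-ins for the paper's actual local sketch features and its 250 categories.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bag_of_features(descriptor_sets, codebook):
    """Quantize each sketch's local descriptors against the codebook and
    return one normalized histogram ("bag of features") per sketch."""
    k = codebook.n_clusters
    histograms = []
    for descriptors in descriptor_sets:
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=k).astype(float)
        histograms.append(hist / hist.sum())
    return np.vstack(histograms)

rng = np.random.default_rng(0)
# Toy stand-in for local sketch descriptors: each "sketch" yields a set of
# 2-D feature vectors drawn around a class-specific center.
centers = {"teapot": (0.0, 0.0), "car": (5.0, 5.0)}
sketches, labels = [], []
for label, center in centers.items():
    for _ in range(20):
        sketches.append(rng.normal(center, 0.5, size=(30, 2)))
        labels.append(label)

# Visual codebook from all descriptors, then one histogram per sketch.
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack(sketches))
X = bag_of_features(sketches, codebook)

# Multi-class (here: two-class) linear SVM over the histograms.
clf = LinearSVC().fit(X, labels)
print(clf.predict(bag_of_features([rng.normal((0.0, 0.0), 0.5, (30, 2))], codebook)))
```

On these cleanly separated synthetic classes the classifier is trivially accurate; on the paper's real crowd-sourced sketches the same pipeline structure reached 56% over 250 categories.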