Difference between revisions of "Sketchy sketches"

From DiVersions
[[File:Splitscreen05.png|800px|Early sketch]]
Some "best of" links:
 
 
* [http://vandal.ist/diversions2019/mim/contours.html MIM Contours]
 
* [http://vandal.ist/diversions2019/mim/sketchrecog.html Sketch recognition outcomes]
 
 
 
some "best of" links:
 
 
* Teddy bear ... http://vandal.ist/diversions2019/mim/sketchrecog.html#4306-02
 
* Teddy bear ... http://vandal.ist/diversions2019/mim/sketchrecog.html#4306-02
 
* bird ... ice-cream-cone http://vandal.ist/diversions2019/mim/sketchrecog.html#4290
 
* bird ... ice-cream-cone http://vandal.ist/diversions2019/mim/sketchrecog.html#4290


Some "best of" links:

Rough notes (not for publication ;)

cf. Saskia's story of misnaming an instrument. (The outcome was that the African museum that was contacted wanted not that the instrument be returned, but that the name be updated, since the name incorrectly referred to a larger class of instruments rather than the particular instrument in question.)

How explicit do we need to be about our intentionality? Danger: does it flatten the potential? Maybe keep it simple / straightforward.

Metadata as interstitial frames introducing the sequences of images + sketch predictions.
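A minimal, hypothetical sketch of this idea (not the project's actual code; the frame structure and field names are assumptions for illustration): interleave a metadata "title card" frame before each run of images and their sketch predictions.

<syntaxhighlight lang="python">
# Hypothetical sketch: build a frame sequence where each group of
# images + sketch predictions is introduced by a metadata frame.
from dataclasses import dataclass, field


@dataclass
class Frame:
    kind: str                 # "metadata" (interstitial) or "image"
    payload: dict = field(default_factory=dict)


def build_frames(sequences):
    """sequences: list of dicts with hypothetical keys
    'metadata' (dict) and 'items' (list of (image, prediction) pairs)."""
    frames = []
    for seq in sequences:
        # interstitial frame introducing the sequence
        frames.append(Frame("metadata", seq["metadata"]))
        for image, prediction in seq["items"]:
            frames.append(Frame("image", {"image": image, "prediction": prediction}))
    return frames


# Made-up example content:
demo = build_frames([{
    "metadata": {"object": "MIM instrument #4306-02", "note": "early sketch"},
    "items": [("contour.png", "teddy bear")],
}])
for f in demo:
    print(f.kind, f.payload)
</syntaxhighlight>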

Algorithms reading algorithms...

<blockquote>Humans have used sketching to depict our visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73% of the time. We compare human performance against computational recognition methods. We develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method is able to identify unknown sketches with 56% accuracy (chance is 0.4%). Based on the computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community.<ref>[https://quickdraw.withgoogle.com QuickDraw (with Google)]</ref></blockquote>
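The pipeline described in the abstract (local descriptors → visual vocabulary → per-sketch histogram → multi-class SVM) can be illustrated with a minimal, hypothetical Python/scikit-learn sketch. Note that with 250 categories, "chance" is 1/250 = 0.4%. The descriptor shape, vocabulary size and the random placeholder data below are assumptions for illustration, not the paper's implementation.

<syntaxhighlight lang="python">
# Minimal sketch of a bag-of-features + multi-class SVM classifier.
# NOT the authors' code: the local descriptors here are random placeholders;
# a real run would extract patch descriptors from the released sketch dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: pretend each sketch yields a stack of local patch
# descriptors (e.g. orientation histograms sampled along the strokes).
# Here: 200 fake sketches, 50 patches each, 36-dim descriptors, 10 categories.
n_sketches, n_patches, dim, n_classes = 200, 50, 36, 10
sketch_patches = rng.normal(size=(n_sketches, n_patches, dim))
labels = rng.integers(0, n_classes, size=n_sketches)

# 1. Build a visual vocabulary by clustering all local descriptors.
vocab_size = 64
kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
kmeans.fit(sketch_patches.reshape(-1, dim))

# 2. Encode each sketch as a normalised histogram of visual words.
def bag_of_features(patches):
    words = kmeans.predict(patches)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bag_of_features(p) for p in sketch_patches])

# 3. Train a multi-class SVM on the histograms and evaluate.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale")  # one-vs-one multi-class by default
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
</syntaxhighlight>

With random placeholder data the accuracy hovers around chance; the point is only to make the shape of the described pipeline concrete.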