Can a “camera” have no lens or sensor? With Paragraphica, it can. Its creator has developed a strange “context-to-image” device that interprets the place around it.

So, uh, what is it?

Paragraphica is a “context-to-image” “camera” that uses location data and artificial intelligence to generate a “photo” of a specific place and moment. The “camera” exists as a physical prototype, but there is also a virtual “camera” you can try, if the server hasn’t crashed!

Tweet from Bjørn Karmann about their crashing server.

Danish engineer Bjørn Karmann has developed a “camera” straight out of a science fiction movie. It has no lens, but it does have a strange red spiderweb-like attachment on the front. The apparatus was inspired by the star-nosed mole, an animal that uses its nose rather than its eyes to navigate its environment, a fitting metaphor for a device that never takes in light.

The viewfinder displays a real-time description of your location. When you press the button, the “camera” creates what Karmann describes as a “scintigraphic representation of the description.” Three dials on top of the device let you control the data and AI parameters that influence the appearance of the image.

What on earth is scintigraphy?

Of course, I had to look up “scintigraphy.” The National Cancer Institute describes it as “a procedure that produces pictures (scans) of structures inside the body, including areas where there are cancer cells.” Wikipedia states that it’s also known as a gamma scan and is a diagnostic test in nuclear medicine. 

What do the images look like?

Below are some examples. You may also use their virtual “camera” to create an image.

Example images courtesy of Bjørn Karmann

What data does the device use to create the images?

According to Karmann’s website, the device collects data through open APIs (Application Programming Interfaces, software that allows two applications to communicate with each other) to determine the location, the weather, the time of day and nearby places. It combines all of that into a paragraph that describes the place and moment, then converts the paragraph into an image. The result is essentially an interpretation of how the AI model “sees” the place at that time.
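Here is a minimal Python sketch of what that paragraph-composing step could look like. The field names, wording and sample values are illustrative assumptions of mine, not Paragraphica’s actual code or prompt format.

```python
# Illustrative sketch of the "compose a paragraph" step described above.
# The data fields, wording and sample values are assumptions for illustration,
# not Paragraphica's actual code or prompt format.

def compose_paragraph(address, weather, temperature_c, time_of_day, nearby_places):
    """Turn raw context data into a descriptive paragraph for a text-to-image model."""
    places = ", ".join(nearby_places)
    return (
        f"It is {time_of_day} at {address}. "
        f"The weather is {weather} and it is {temperature_c} degrees Celsius. "
        f"Nearby there is {places}."
    )

prompt = compose_paragraph(
    address="a quiet street in Amsterdam",
    weather="overcast with light rain",
    temperature_c=14,
    time_of_day="late afternoon",
    nearby_places=["a canal", "a row of parked bicycles", "a small cafe"],
)
print(prompt)
```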

A photo of the scene that Paragraphica will interpret. Image courtesy of Bjørn Karmann.
The image that Paragraphica generated from the location. Image courtesy of Bjørn Karmann.

Karmann states, “Interestingly the photos do capture some reminiscent moods and emotions from the place but in an uncanny way, as the photos never really look exactly like where I am.”

According to Karmann’s website, the device is built around a Raspberry Pi 4 and uses the Stable Diffusion API as its text-to-image generator, along with Noodl and custom Python code.
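As a rough idea of how a Raspberry Pi could hand the composed paragraph off to a hosted text-to-image service, here is a hedged Python sketch. The endpoint URL, request parameters and response handling are placeholders of my own, not the actual Stable Diffusion API calls Karmann’s code makes.

```python
# Hedged sketch of sending the composed paragraph to a hosted text-to-image
# service. The URL, parameter names and response format are placeholders,
# not the real Stable Diffusion API or Karmann's implementation.
import requests

def generate_image(prompt: str, api_key: str) -> bytes:
    response = requests.post(
        "https://example-image-host.invalid/v1/text-to-image",  # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "width": 512, "height": 512},
        timeout=120,
    )
    response.raise_for_status()
    return response.content  # assuming the service returns raw image bytes

if __name__ == "__main__":
    image_bytes = generate_image("It is late afternoon on a quiet street...", api_key="YOUR_KEY")
    with open("paragraphica_frame.png", "wb") as f:
        f.write(image_bytes)
```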

Paragraphica AI camera

What do the dials do?

  • The first dial controls the radius of the area that the device searches.
  • The second dial controls something akin to “film grain.”
  • The third dial controls how closely the AI follows the paragraph; Karmann uses the analogy of sharpening or blurring in a camera. (A speculative mapping of the dials to image-generation parameters is sketched after this list.)
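Those descriptions line up loosely with the controls most diffusion-based image generators expose: a radius for gathering nearby-place data, a noise or seed value, and a guidance scale. The mapping below is speculation on my part, with made-up parameter names and ranges, but it shows how three analog dials could plausibly drive the generation step.

```python
# Speculative mapping of Paragraphica's three dials to typical
# diffusion-model parameters. Names and value ranges are guesses
# for illustration, not Karmann's actual settings.

def dials_to_params(dial_1: float, dial_2: float, dial_3: float) -> dict:
    """Each dial is read as 0.0-1.0 and scaled into a plausible range."""
    return {
        "search_radius_m": int(50 + dial_1 * 950),  # how far to look for nearby places
        "noise_seed": int(dial_2 * 2**31),          # the "film grain" dial
        "guidance_scale": 1.0 + dial_3 * 14.0,      # how closely to follow the paragraph
    }

print(dials_to_params(0.5, 0.2, 0.8))
```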

Why do you keep using quotes around “camera” and “photo”?

I’ve always thought of a camera as a device that captures light. Whether digital or analog, that is what cameras do, and it is how Merriam-Webster, Oxford and Wikipedia generally define them. Paragraphica may create images, but it’s not really a camera.

And they’re not photos. A photograph, after all, is made with a camera, and the very word is built on the Greek for “light.”

Is this splitting hairs? To me, no. What’s your opinion?

Also, is this something that will inspire you to experiment? Let us know in the comments.