TL;DR: Yes, we can! (and by “we,” we mean a novel AI model). We accomplish this by sensing the tiny vibrations on the container’s surface and relating them to the container’s liquid level and type (e.g., Coke bottle, milk carton).
First, we must capture these tiny surface vibrations at high speed. To achieve this, we develop a novel computational imaging system that remotely captures object vibrations across a 2D grid of scene points.
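To give a feel for the sensing principle (the full optical design is in the paper), here is a simplified sketch: a laser-illuminated surface produces a speckle pattern, and tiny surface tilts translate that pattern on the sensor, so tracking sub-pixel 2D speckle shifts over time recovers a two-axis vibration signal per point. The code below is an illustrative assumption, not our actual pipeline; the `frames` array, patch locations, and the phase-correlation tracker are all hypothetical choices.

```python
# Simplified sketch of the speckle-vibrometry sensing principle (not the
# paper's actual pipeline): surface tilts translate the speckle pattern on
# the sensor, so per-point 2D speckle shifts over time give a two-axis
# vibration signal. `frames` and `centers` are hypothetical inputs.
import numpy as np
from skimage.registration import phase_cross_correlation

def track_speckle_vibrations(frames, centers, patch=32):
    """Recover per-point 2D vibration signals from a high-speed speckle video.

    frames:  (T, H, W) array of speckle images.
    centers: list of (row, col) scene points to track.
    Returns: (num_points, T, 2) array of sub-pixel speckle shifts.
    """
    half = patch // 2
    signals = np.zeros((len(centers), len(frames), 2))
    for p, (r, c) in enumerate(centers):
        ref = frames[0, r - half:r + half, c - half:c + half]
        for t, frame in enumerate(frames):
            cur = frame[r - half:r + half, c - half:c + half]
            # Sub-pixel phase correlation between reference and current patch.
            shift, _, _ = phase_cross_correlation(ref, cur, upsample_factor=50)
            signals[p, t] = shift  # (dy, dx) ~ two-axis surface vibration
    return signals
```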
We use our system to sense the vibrations of multiple scene containers at once, while “exciting” these vibrations with sound played from a nearby speaker.
[Figure: our camera system capturing the vibrations of six containers at once.]
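The excitation sound itself is flexible: for the controlled-source setting, any wideband signal spanning the containers’ resonances will do. Below is a hedged sketch of one option; the chirp parameters and the `sounddevice` playback are illustrative assumptions, not necessarily what we used.

```python
# Hedged sketch of one way to excite container vibrations: play a broadband
# chirp from a nearby speaker. The exact excitation signal is an assumption
# here; any wideband sound works in spirit.
import numpy as np
from scipy.signal import chirp
import sounddevice as sd  # hypothetical playback choice

fs = 48_000                      # audio sample rate (Hz)
t = np.linspace(0, 3.0, 3 * fs)  # 3-second sweep
# Logarithmic sweep from 50 Hz to 8 kHz covers typical container resonances.
excitation = 0.5 * chirp(t, f0=50, t1=3.0, f1=8000, method='logarithmic')
sd.play(excitation, fs)
sd.wait()  # block until playback finishes
```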
For each container, we measure two-axis vibrations at multiple surface points (three points in the figure above). We input the vibrations into a novel physics-inspired Vibration Transformer, which is trained to predict the container type and its hidden liquid level.
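The full physics-inspired architecture is described in the paper; as a rough mental model, here is a minimal, plain transformer sketch that tokenizes the per-point, two-axis signals into time chunks and predicts both outputs. All dimensions, the tokenization, and the two classification heads are illustrative assumptions, not our actual design.

```python
# Minimal, hedged stand-in for the Vibration Transformer: a plain encoder
# over chunked multi-point, two-axis vibration signals with two output heads.
import torch
import torch.nn as nn

class VibrationClassifier(nn.Module):
    """Toy stand-in for the Vibration Transformer (illustrative only)."""

    def __init__(self, chunk=256, d_model=128, n_types=10, n_levels=5):
        super().__init__()
        self.chunk = chunk
        # Each token embeds one time chunk of one point's two-axis signal.
        self.embed = nn.Linear(2 * chunk, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.type_head = nn.Linear(d_model, n_types)    # container type
        self.level_head = nn.Linear(d_model, n_levels)  # discrete fill level

    def forward(self, x):
        # x: (batch, points, 2, T) vibrations; T must be divisible by chunk.
        # Positional encodings are omitted for brevity.
        b, p, a, t = x.shape
        x = x.reshape(b, p, a, t // self.chunk, self.chunk)
        x = x.permute(0, 1, 3, 2, 4).reshape(b, -1, a * self.chunk)
        tokens = torch.cat([self.cls.expand(b, -1, -1), self.embed(x)], dim=1)
        feat = self.encoder(tokens)[:, 0]               # CLS summary token
        return self.type_head(feat), self.level_head(feat)
```

For instance, with three tracked points per container, `model(vibrations)` on a `(batch, 3, 2, T)` tensor would return one logit vector per task.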
The relationship between the vibration response of everyday containers and their fill level is actually quite complex. Don’t believe us? Try for yourself. Press below each container to hear its response at different fill levels. Can you spot a pattern?
If you couldn’t, don’t blame yourself. The vibrational response depends on many factors, including object geometry, materials, and fluid-structure interactions. Luckily, our Vibration Transformer has a more acute “ear” than you, and can successfully classify the container’s hidden liquid level across a variety of containers, as we demonstrate experimentally.
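For a taste of why fill level shifts the response at all, consider one simplified mechanism: the air cavity of a partially filled bottle acts roughly like a Helmholtz resonator, so its resonance rises as liquid displaces air. Real containers add shell bending modes and fluid loading on top of this, which is why no single formula suffices. All dimensions below are made up for illustration.

```python
# Illustrative physics only: a partially filled bottle's air cavity acts
# roughly like a Helmholtz resonator, f = (c / 2*pi) * sqrt(A / (V * L)).
# Shell modes and fluid loading, ignored here, make the real response far
# richer. All dimensions are invented for this example.
import numpy as np

c = 343.0                 # speed of sound in air (m/s)
A = np.pi * 0.01**2       # neck cross-section, 1 cm radius (m^2)
L = 0.08                  # effective neck length (m)
V_empty = 0.5e-3          # cavity volume when empty: 0.5 L (m^3)

for fill in (0.0, 0.25, 0.5, 0.75):
    V_air = V_empty * (1 - fill)   # air volume shrinks as liquid rises
    f = (c / (2 * np.pi)) * np.sqrt(A / (V_air * L))
    print(f"fill {fill:4.0%}: resonance ~ {f:6.1f} Hz")
```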
Moreover, our architecture is robust to the vibration source, yielding correct liquid-level estimates under both controlled and ambient scene sounds. Our model also generalizes to unseen container instances within known classes (e.g., training on five Coke cans of a six-pack and testing on the sixth) and to unseen fluid levels.
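Concretely, the instance-generalization test corresponds to a leave-one-instance-out protocol; here is a hedged sketch, where the data layout is a hypothetical assumption.

```python
# Hedged sketch of the leave-one-instance-out protocol implied above
# (e.g., train on five Coke cans, test on the sixth). `dataset` is a
# hypothetical list of (instance_id, vibration_clip, label) tuples.
def instance_splits(dataset):
    instance_ids = sorted({inst for inst, _, _ in dataset})
    for held_out in instance_ids:
        train = [s for s in dataset if s[0] != held_out]
        test = [s for s in dataset if s[0] == held_out]
        yield held_out, train, test
```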
@inproceedings{Kichler:2025,
  title     = {Learning to See Inside Opaque Liquid Containers using Speckle Vibrometry},
  author    = {Kichler, Matan and Bagon, Shai and Sheinin, Mark},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      = {2025}
}