daviderovell0 / bzzzbz
Digital video synthesizer for live music performance, powered by Raspberry Pi + OpenGL
Home Page: https://twitter.com/bzzzbz_video
License: GNU General Public License v3.0
I would like to attempt this build, do you have a list of components required to assemble the prototype?
Add PAL/NTSC along with the HDMI out.
It is likely that we are going to have several "FFT frames" for a single video frame transition (at, let's say, 60 fps). This means that if we simply read the FFT values at the time of the update, we lose the FFT values that came before.
Instead, we could aggregate all the FFT values between 2 video frames in some meaningful way (such as a gradient or an average), and use that value instead.
On the other hand, since the audio sampling and the FFT are very fast, the variation in the FFT frames in the video transition might be minimal, therefore making this issue irrelevant.
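The aggregation idea above can be sketched as a small accumulator that the FFT thread feeds and the video thread drains once per frame. This is a minimal sketch with hypothetical names (`FFTAggregator`, `push`, `consume` are not the project's actual API), using a per-bin average as the aggregation:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Accumulates the FFT magnitude frames that arrive between two video
// frames and exposes their per-bin average, so no spectral data between
// video updates is discarded. Illustrative sketch, not the real API.
class FFTAggregator {
public:
    explicit FFTAggregator(std::size_t bins) : sum_(bins, 0.0f), count_(0) {}

    // Called from the audio/FFT thread for every new magnitude frame.
    void push(const std::vector<float>& magnitudes) {
        for (std::size_t i = 0; i < sum_.size(); ++i)
            sum_[i] += magnitudes[i];
        ++count_;
    }

    // Called once per video frame: returns the per-bin average and resets.
    std::vector<float> consume() {
        std::vector<float> avg(sum_.size(), 0.0f);
        if (count_ > 0)
            for (std::size_t i = 0; i < sum_.size(); ++i)
                avg[i] = sum_[i] / static_cast<float>(count_);
        std::fill(sum_.begin(), sum_.end(), 0.0f);
        count_ = 0;
        return avg;
    }

private:
    std::vector<float> sum_;
    std::size_t count_;
};
```

Swapping the average for a gradient (last frame minus first) would only change the body of `consume`.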
(#2)
How much processing power do we need for certain HQ graphics (such as fractals)?
Can the Raspberry Pi support them?
Shall we consider generating video on a laptop and making the Raspberry Pi send data only?
Implement 4 (?) CV ins and 1 clock in. With 4 knobs a single MCP3008 ADC might be enough, but we might end up needing two of them.
Both hardware and software implementation required.
We need to update the \docs section automatically every time a commit is merged to master, by recompiling the Doxygen documentation.
A similar approach is needed for testing.
We could use Travis CI, which is widely used and compatible with GitHub.
Refer to #31. Use the SPI potentiometer thread to update some test parameters in the video generation.
Test on the laptop, then on the Pi.
Hi and thanks for the cool project!
Unfortunately I get this error message after make
command.
Maybe I am just inexperienced (total noob haha), but I followed your instructions!
Attached a screenshot, maybe you can help. thanks a lot! :)
There are (sometimes) errors related to JACK for audio acquisition:
We need to solve these, as they tend to freeze or break the program, which then only starts after several tries.
They might be related to faulty hardware (breadboard + breakout board is not ideal), so we need to check again once we have the final hardware design.
I will post the actual error messages in the comments.
(#2)
Generate a sample image using OpenGL on the Raspberry PI.
Write basic unit tests for the different sections of the project.
Link it to github PR merges for optimal CI.
(#4)
Write a sample program that reads the input from the ADC connected to the potentiometer, to kick off the control side and have a reference for future work.
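Since the ADC in question is the MCP3008 mentioned elsewhere, the protocol framing can be shown without any hardware: a single-ended read is a 3-byte SPI transaction, per the chip's datasheet. The actual SPI transfer (e.g. through /dev/spidev and ioctl) is omitted here; this sketch only covers building the outgoing bytes and decoding the 10-bit result, and the function names are illustrative:

```cpp
#include <array>
#include <cstdint>

// Build the 3 bytes to clock out for a single-ended MCP3008 read of
// `channel` (0-7): start bit, then single-ended flag + channel number
// in the top nibble of byte 2, then a don't-care byte to clock data in.
std::array<uint8_t, 3> mcp3008_tx(uint8_t channel) {
    return { 0x01,
             static_cast<uint8_t>((0x08 | channel) << 4),
             0x00 };
}

// Extract the 10-bit conversion result (0-1023) from the 3 bytes
// clocked back in during the same transaction.
uint16_t mcp3008_decode(const std::array<uint8_t, 3>& rx) {
    return static_cast<uint16_t>((rx[1] & 0x03) << 8) | rx[2];
}
```

On the Pi, the sample program would pass `mcp3008_tx(ch)` through a full-duplex SPI transfer and feed the received bytes to `mcp3008_decode`.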
We need to generate video from FFT using either bitmap format or vector graphics.
The initial step consists of creating basic images from scratch (the sound connection will come later).
PCB with controls, audio inputs/outputs and the Raspberry Pi.
Steps:
Now that we have sampled audio data, we should start implementing the FFT in order to get a decent frequency representation with mappable parameters for the video.
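To make the bin layout concrete, here is a naive real-input DFT returning N/2+1 magnitude bins (64 samples give 33 bins, 32 give 17, matching the buffer sizes discussed for the synth). This is only a sketch of the output we want to map to video; a real build would use an optimized FFT library such as FFTW rather than this O(N²) loop:

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Naive real-input DFT: returns the magnitude of bins 0..N/2.
// Illustrative only; replace with an optimized FFT in production.
std::vector<float> dft_magnitudes(const std::vector<float>& x) {
    const std::size_t n = x.size();
    const double pi = std::acos(-1.0);
    std::vector<float> mags(n / 2 + 1);
    for (std::size_t k = 0; k <= n / 2; ++k) {
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t t = 0; t < n; ++t) {
            double angle = -2.0 * pi * static_cast<double>(k * t) / n;
            acc += std::complex<double>(x[t]) * std::polar(1.0, angle);
        }
        mags[k] = static_cast<float>(std::abs(acc));  // bin magnitude
    }
    return mags;
}
```

A pure sine at bin k shows up as a single peak at index k, which is exactly the kind of feature we want to map to a video parameter.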
Implement shader switching on the fly.
We need to find a way to meaningfully map audio and controls to video. For example, a complex track with different instruments might map different frequencies to different video properties (e.g. object colours, size). However, this mapping might not make sense for a sine oscillator sweep, since a single sonic change would then alter several video properties at once.
We might be able to solve that with different types of mapping, giving the user the possibility to choose one according to their sound.
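One way to make "different types of mapping" concrete is a user-selectable mapping mode. The struct, enum and band names below are hypothetical, not the project's API; the point is only the shape of the idea, with a per-band mode for multi-instrument material and a global mode where one overall level drives everything, so a sweep doesn't scramble unrelated properties:

```cpp
#include <vector>

// Hypothetical video parameters driven by the audio analysis.
struct VideoParams { float hue; float size; float speed; };

enum class MappingMode { PerBand, Global };

// `bands` holds three band energies: low, mid, high (assumed layout).
VideoParams map_audio(const std::vector<float>& bands, MappingMode mode) {
    float low = bands[0], mid = bands[1], high = bands[2];
    if (mode == MappingMode::PerBand) {
        // Multi-instrument material: each band drives one property.
        return { high, low, mid };
    }
    // Single-source material (e.g. a sine sweep): one overall level
    // drives all properties coherently.
    float level = (low + mid + high) / 3.0f;
    return { level, level, level };
}
```

The user-facing control would then just switch `MappingMode` per patch.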
Design and assemble a single board for inputs, outputs and control to substitute the prototype breakout board.
(#4)
Choose the expressive sensors and the ADCs (they MUST already have a kernel driver for Raspbian).
Make sure that we have enough GPIO to interface with the ADCs and buttons.
Ensure we are under budget.
Even using shaders (running on the Pi's GPU), a simple video shows some lag on the Pi. The larger the window, the more visible the lag.
DSP can be used to smooth the noisy potentiometer response, detect BPM, and filter audio for better audio-reactive behaviour (with "full songs").
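For the potentiometer smoothing specifically, a one-pole low-pass (exponential moving average) is the usual minimal fix. This is a sketch, not the final DSP design; the class name is made up:

```cpp
// One-pole low-pass (exponential moving average) over raw pot readings.
// alpha in (0, 1]: smaller values give heavier smoothing but more lag.
class PotSmoother {
public:
    explicit PotSmoother(float alpha)
        : alpha_(alpha), state_(0.0f), primed_(false) {}

    // Feed one raw ADC reading, get the smoothed value back.
    float update(float raw) {
        if (!primed_) { state_ = raw; primed_ = true; }  // jump to first sample
        else          { state_ += alpha_ * (raw - state_); }
        return state_;
    }

private:
    float alpha_;
    float state_;
    bool primed_;
};
```

One instance per knob, updated at the SPI scan rate, should be enough to stop parameter jitter in the video.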
We need to design specifications for the audio processing of bzzzbz video synth.
Currently ADC chip with jack input seems the most reasonable option.
Prototype v2.0 needs to have a box or some sort of enclosure for the controls, inputs and outputs:
One of the requirements of the course is to generate documentation using Doxygen.
Let's make sure to write our programs with sufficient documentation.
Create a CMakeLists.txt for compilation and linking (especially for the OpenGL-related apps).
Implement the main class structure with constructor, destructor and multithreading, and plug in the external .h programs for the thread routines: control, audio and video.
The concrete design is still to be defined; it will become clearer later. For now we can just implement a general class as described above.
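As a starting point, the general class could look like the sketch below: one worker thread per subsystem, started in the constructor and joined in the destructor. The class name and loop bodies are placeholders (the real routines would come from the external headers), and in practice the video loop would likely stay on the main thread for OpenGL context reasons:

```cpp
#include <atomic>
#include <thread>

// Placeholder top-level class: spawns control and audio worker threads,
// stops and joins them on destruction. Loop bodies are stubs.
class Bzzzbz {
public:
    Bzzzbz() : running_(true) {
        control_ = std::thread([this] { while (running_) { /* poll ADC */ } });
        audio_   = std::thread([this] { while (running_) { /* JACK/FFT */ } });
    }
    ~Bzzzbz() {
        running_ = false;   // signal both loops to exit
        control_.join();
        audio_.join();
    }
    bool running() const { return running_; }

private:
    std::atomic<bool> running_;
    std::thread control_, audio_;
};
```

The `std::atomic<bool>` is the minimal safe way to signal shutdown across threads; richer inter-thread communication (buffers, condition variables) can be layered on later.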
The audio chip appears to be successfully bypassing the signal. The output is stable on the line-out jack but not on the headphone out, where the audio seems to be missing the bass frequencies.
We need to figure out where this happens and fix it: it might be a circuit issue on the breakout board or a driver configuration problem.
(#2)
Shaders should provide a flexible way to generate and manipulate video in real time through GPU.
We need to make a simple repetitive video that uses shaders. The basic workflow should be: test on the laptop ----> test on the Raspberry Pi.
We need to ensure real-time execution on the Pi; further testing might be required.
Merge the control and audio breakout boards and expand them for the new features.
Further develop the simple potentiometer sample program (#8) to have an array of at least 4 potentiometers, with data being read (effectively) simultaneously, in real time, from all 4.
Since the pots' ADC uses SPI, the main idea is to write a loop that keeps switching the SPI slave and reading each input in turn, fast enough to appear simultaneous. We might need to use direct /dev access instead of a /sys-based library such as WiringPi to achieve higher speed.
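The round-robin loop itself can be separated from the hardware by injecting the SPI read as a callable, which also makes it testable off the Pi. This is a sketch under that assumption; on the Pi the callable would wrap the actual /dev/spidev transaction:

```cpp
#include <array>
#include <cstdint>
#include <functional>

// One SPI read of a given ADC channel, injected so the scan logic can
// run without hardware (hypothetical signature, not the project's API).
using SpiRead = std::function<uint16_t(uint8_t /*channel*/)>;

// One pass of the round-robin scan: switch channel, read, move on.
// Called in a tight loop, this is what makes the 4 pots appear to be
// read simultaneously.
std::array<uint16_t, 4> scan_pots(const SpiRead& read) {
    std::array<uint16_t, 4> values{};
    for (uint8_t ch = 0; ch < 4; ++ch)
        values[ch] = read(ch);
    return values;
}
```

Whether this is fast enough through sysfs, or needs direct /dev access, is exactly what the issue proposes to measure.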
Now that OpenGL is installed and running, we need to get familiar with OpenGL functions and workflow in order to manipulate video.
The cmake_minimum_required version (3.10) is too high, since it is incompatible with some Linux distributions.
The nannou project (https://github.com/nannou-org/nannou) is a collection of code written in Rust, aimed at making it easy for artists to express themselves.
It could be a suitable alternative, since Rust is as fast as C and therefore satisfies the real-time requirements.
We could:
We need to check whether using Rust is fine for the course assessment.
We need to come up with a design for an expressive surface to control the video "synthesis" parameters.
We need to:
Software needed to operate the hardware and to add functionalities that are not related to audio-video.
We need to drive a display interactively to show either the shader names or the current information.
Right now the shaders and the code are loaded manually. It would be nice to have some sort of interface to quickly load anything onto bzzzbz.
Right now the pi shows visible delay when displaying simple figures and changing some of the features (such as colours and position) after pressing a key. The image is displayed through raspbian desktop GUI.
We have to check if that can be a problem and consider alternative solutions.
Make sure to write guides and docs that conform to community standards on GitHub: contributing, README, etc.
A checklist can be found under Insights > Community.
At the moment we use the sysfs method to interact with GPIOs via the gpio-sysfs.h class. That interface is now old and deprecated, and should be replaced with something else. The best alternative seems to be libgpiod, a C library that comes with CLI utility tools, has been added to the Linux kernel, and therefore seems the safest in terms of future support. Plus, it doesn't add extra dependencies.
libgpiod in kernel.org: https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/about/
Useful info and discussions:
Include a BOM (through KiCad) to keep track of the board cost.
We need to route the joystick's output to the ADC.
Make it on a breadboard first.
Now that we have all the different sections working separately we need to put everything together in a main program. The most relevant points are:
I suggest doing this in 2 separate merges: video <-> controls and audio <-> FFT. We can then proceed to merge the 2 into a single program and implement the mapping.
This will give us our first working prototype!
Use the JACK thread to fill a buffer with incoming audio at 48 kHz, and trigger an interrupt for the FFT processing.
Reasonable buffer sizes are 64 and 32 samples giving respectively 33 and 17 bins to parameterize the video.
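The handoff described above can be sketched without any JACK dependency: a buffer that accumulates incoming samples (as the JACK process callback would deliver them) and fires a callback once a full analysis window is ready. Names are illustrative, and the real version would signal a separate FFT thread instead of calling it inline:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Accumulates audio samples and invokes `onFull` each time a complete
// analysis buffer (e.g. 64 or 32 samples -> 33 or 17 FFT bins) is ready.
class AudioBuffer {
public:
    AudioBuffer(std::size_t size,
                std::function<void(const std::vector<float>&)> onFull)
        : buf_(size), used_(0), onFull_(std::move(onFull)) {}

    // Called with each chunk of samples from the audio thread.
    void push(const float* samples, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            buf_[used_++] = samples[i];
            if (used_ == buf_.size()) {
                onFull_(buf_);   // trigger the FFT processing
                used_ = 0;
            }
        }
    }

private:
    std::vector<float> buf_;
    std::size_t used_;
    std::function<void(const std::vector<float>&)> onFull_;
};
```

In the real program, `onFull_` would wake the FFT thread (e.g. via a condition variable) rather than run the transform on the audio thread.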
Refer to release v1.0 -> #31
Hi,
I don't get the program to work (on Ubuntu 20.10):
$ ./src/bz
terminate called after throwing an instance of 'char const*'
Aborted (core dumped)
I never really learned C++, so before spending hours debugging, I ask if this is a known thing, or if there is a recommended Linux version where it should work, or if I better only try with a Raspberry device.
Thanks in advance! The project looks very promising and I would love to be able to use it! ;)
Get the audio data from the audio codec (WM8731).
plan:
Write software to drive the new on-board screen. The idea is to have a classic menu to navigate between options with the 3 buttons + fn button (6 key combinations).
For the beginning, it would be good to have a display class with basic menu features:
Real menu options to define when the features come along together.
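The basic menu features could start from a model like the one below: next/prev move a wrapping cursor, select returns the current entry, and the rendering to the on-board screen stays separate. Class and option names are placeholders until the real features land:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Minimal menu model for the 3-button (+ fn) interface. Display
// rendering is deliberately kept out; this only models navigation.
class Menu {
public:
    explicit Menu(std::vector<std::string> items)
        : items_(std::move(items)), cursor_(0) {}

    void next() { cursor_ = (cursor_ + 1) % items_.size(); }
    void prev() { cursor_ = (cursor_ + items_.size() - 1) % items_.size(); }
    const std::string& select() const { return items_[cursor_]; }

private:
    std::vector<std::string> items_;
    std::size_t cursor_;
};
```

The fn-button combinations could later map to a second `Menu` instance or to per-entry actions without changing this navigation core.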