Age | Commit message | Author
You provide a tempo by playing the metronome on the MIDI keyboard. Then
press M-m to have the sequencer continue the same tempo on the MIDI drum
channel.
Without this, the (several-day-long) dumped.mid file ends up
invalid (presumably because the largest relative time does not fit into
the available number of bits).
Arguably this is a bug in Codec.Midi, or anyway it's an undocumented
limitation. The proper solution is probably somewhat
complicated/convoluted: do a tempo change before and after very long
time differences, when (or before) serializing the Midi data.
There's little point in implementing that fix though, because this giant
midi file is not practically useful anyway.
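For context on the limitation above: a standard MIDI file stores each delta time as a variable-length quantity of at most four bytes, so the largest representable gap is 0x0FFFFFFF ticks, and a gap of several days exceeds that at ordinary resolutions. A minimal sketch of the overflow check, where the 480 ticks-per-beat and 500000 µs-per-beat (120 BPM) figures are illustrative assumptions, not values from this program:

```haskell
-- Largest delta time a four-byte MIDI variable-length quantity can hold.
maxDeltaTicks :: Integer
maxDeltaTicks = 0x0FFFFFFF

-- Ticks spanned by a gap of the given seconds, at a given
-- ticks-per-beat resolution and microseconds-per-beat tempo.
gapTicks :: Integer -> Integer -> Integer -> Integer
gapTicks seconds ticksPerBeat usPerBeat =
  seconds * 1000000 * ticksPerBeat `div` usPerBeat

-- Would a single relative time of this many seconds overflow,
-- assuming 480 ticks per beat at 120 BPM?
overflows :: Integer -> Bool
overflows seconds = gapTicks seconds 480 500000 > maxDeltaTicks
```

At these assumed settings a gap of about four days no longer fits, which is consistent with the multi-day dump coming out invalid.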
The filename is generated automatically from the date of the earliest
event in the recording.
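A sketch of deriving such a filename with the standard time library; the exact format string and the ".mid" suffix are assumptions, not necessarily what the program uses:

```haskell
import Data.Time.Clock (UTCTime)
import Data.Time.Format (defaultTimeLocale, formatTime)

-- Hypothetical sketch: name the recording after the timestamp of its
-- earliest event.
recordingName :: UTCTime -> FilePath
recordingName earliest =
  formatTime defaultTimeLocale "%Y-%m-%d-%H%M%S" earliest ++ ".mid"
```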
No behavior should be affected.
(Currently accessible only via the testing command "save".)
The triad detection now returns the correct type (detecting all triads
on all channels, and saving the channel). This information is still not
used; the plan is to call filters for each channel separately, so
there is no point in making an individual filter operate on multiple
channels simultaneously.
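Per-channel triad detection along these lines can be sketched as follows; the function names and the restriction to root-position triads are assumptions for illustration, not the program's actual code:

```haskell
import Data.Function (on)
import Data.List (groupBy, sort, sortOn)

data TriadType = Major | Minor deriving (Eq, Show)

-- Group the currently pressed (channel, pitch) pairs by channel and
-- look for a root-position triad in each group, keeping the channel.
detectTriads :: [(Int, Int)] -> [(Int, TriadType, Int)]  -- (channel, type, root)
detectTriads pressed =
  [ (ch, ty, root)
  | grp <- groupBy ((==) `on` fst) (sortOn fst pressed)
  , let ch = fst (head grp)
  , let pitches = sort (map snd grp)
  , Just (ty, root) <- [triadOf pitches]
  ]

-- Root-position intervals: major = 4+3 semitones, minor = 3+4.
triadOf :: [Int] -> Maybe (TriadType, Int)
triadOf [a, b, c]
  | b - a == 4 && c - b == 3 = Just (Major, a)
  | b - a == 3 && c - b == 4 = Just (Minor, a)
triadOf _ = Nothing
```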
That is, set the velocity to the average of the velocities of the
individual triad keys.
Also, output the triads to channel 0 instead of channel 1. Previously,
all output had been changed to channel 1 to facilitate playing live on
channel 0 on top of playback on channel 1. However, it was discovered
that the external MIDI synthesizer built into my MIDI keyboard does not
listen on channel 1.
(actually pitch maps, now)
I.e., any triads played will have additional notes played at the root &
fifth of the octave above and below the triad.
Eventually I want to program the triads using the keyboard itself, so
that the chords can be "filled" arbitrarily.
This also shows the need to represent the pressed key set differently
than as a Set of (channel, pitch) pairs: the velocity needs to be saved,
so that the "fill" notes can use it (probably use the average of the
triad).
Furthermore, the whole infrastructure needs to be designed around the
concept of input channels mapping to output channels. Filters (such as
the triad filter) should be applied to channels -- right now, the
assumption of a single channel has been baked in in several places, but
this will eventually interfere with things like looping.
(Playing back the input needs to be able to play back the filters that
were in place on the input. Although, note: we also want to record
output and primarily play that back.)
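The fill described above, including the averaged velocity, can be sketched like this; the names and the flat (pitch, velocity) result shape are assumptions, not the program's actual code:

```haskell
-- Hypothetical sketch: given a detected triad's root pitch and the
-- velocities of its keys, add the root and fifth one octave below and
-- one octave above, each at the average velocity of the triad.
fillNotes :: Int -> [Int] -> [(Int, Int)]   -- root, velocities -> (pitch, velocity)
fillNotes root vels =
  [ (p, avg) | p <- [root - 12, root - 12 + 7, root + 12, root + 12 + 7] ]
  where avg = sum vels `div` length vels
```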
The live output goes to channel 0. This prevents the two from messing
with each other (cancelling each other's notes).
Simply hard-coding channel 1 is a bad idea long-term, though. We
actually could have input from more than one channel (if we pump a MIDI
file into the program with "aplaymidi", for example). I suppose we want
to map the entire range of channels based on the input source in order
to prevent input sources from affecting one another. Then generated
playback could be considered a separate input source.
All of this will have to be done in order to deal with simultaneous
looping of multiple tracks, anyway.
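One way the per-source channel mapping could look, sketched under assumptions (the source names and base offsets are invented for illustration; MIDI has 16 channels per port):

```haskell
-- Hypothetical sketch: each input source owns a block of the 16 output
-- channels, so sources (live input, generated playback, a piped-in
-- MIDI file) cannot cancel one another's notes.
data Source = Live | Playback deriving (Eq, Show)

outputChannel :: Source -> Int -> Int
outputChannel src inChan = (base src + inChan) `mod` 16
  where
    base Live     = 0   -- live input keeps the low channels
    base Playback = 8   -- playback is shifted into its own range
```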
Currently, the "dump" command plays the entire database. Only the SQL
SELECT statement needs to be changed in order to play a specific
time-range.
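The changed statement might look like the following; the table and column names are assumptions, not the actual schema:

```haskell
-- Hypothetical sketch of the time-range variant of the dump query,
-- with placeholders for the start and end of the range.
dumpRangeQuery :: String
dumpRangeQuery =
  "SELECT time, message FROM events \
  \WHERE time >= ? AND time < ? \
  \ORDER BY time"
```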
Also whitespace, comments, & other non-functional changes.
This format is still inefficient -- because the time is still
represented as a string that is parsed as an Integer -- but it's
certainly much more efficient than before. And more importantly, it can
be both written and read.
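A round-trippable line format of that kind can be sketched as below; the exact on-disk shape (decimal timestamp followed by message bytes, space-separated) is an assumption based on the description:

```haskell
-- Hypothetical sketch: one event per line, the time printed as an
-- Integer (still a string on disk, hence the remaining inefficiency),
-- followed by the raw MIDI message bytes.
type Event = (Integer, [Int])   -- (time, message bytes) -- assumed shape

writeEvent :: Event -> String
writeEvent (t, bytes) = unwords (show t : map show bytes)

readEvent :: String -> Event
readEvent s = case words s of
  (t : bytes) -> (read t, map read bytes)
  []          -> error "empty event line"
```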
My intent in doing this was to have a format suitable for reading
and writing to the database; the old format (using "show" on the
Sound.ALSA.Sequencer.Event.T) could not be read back.
Unfortunately, this new format cannot be read back _OR WRITTEN_!
At least not without more conversion (of my list of pairs into
Thielemann's specialized event-list).
My new plan is to use Codec.Midi.Message from HCodecs instead. But I'll
commit this before I get to that.
(Replay last MIDI input, divided at 3-second periods of silence.)
This has a bug where if there is too much input to replay, the program
exits. (Oddly, there is no exit failure code.)
I think this is because the kernel ALSA buffer is full. The solution is
to implement in-application queueing.
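The silence-based division can be sketched as a pure function over the time-stamped event list; the names and the foldr formulation are illustrative assumptions:

```haskell
-- Hypothetical sketch: split a time-stamped event list wherever the
-- gap to the previous event is at least the threshold (3 seconds in
-- the command above, in whatever time unit the stamps use).
splitAtSilence :: Integer -> [(Integer, a)] -> [[(Integer, a)]]
splitAtSilence gap = foldr step []
  where
    step e [] = [[e]]
    step e@(t, _) (grp@((t', _) : _) : rest)
      | t' - t >= gap = [e] : grp : rest   -- silence: start a new group
      | otherwise     = (e : grp) : rest   -- close enough: same group
    step e ([] : rest) = [e] : rest
```

Replaying the last chunk is then a matter of taking the final group.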
This just sets the time field on the ALSA packet. It works!
I had planned to do something much more complicated but if this works
out, everything is easy.
Start and end times, and leading silence duration, are all available
through SQL.
This doesn't actually seem to make it faster, so something is wrong.
This is too slow; there's a visible delay as the SQL statement executes.
The plan is to run this in a separate thread.
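One standard way to move the slow query off the event loop, sketched with base's concurrency primitives (the name and MVar hand-off are assumptions, not necessarily the intended design):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- Hypothetical sketch: run a slow action in a forked thread and hand
-- its result back through an MVar, so the caller never blocks on SQL.
queryInBackground :: IO a -> IO (MVar a)
queryInBackground slowQuery = do
  box <- newEmptyMVar
  _ <- forkIO (slowQuery >>= putMVar box)
  return box
```

The MIDI loop would call `queryInBackground` up front and `takeMVar` the box only once the result is actually needed.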