Christmas 2017: Yule tree Horned God decoration

Crude felt face with large dramatic eyebrows, googly eyes, and bright red lips, atop a small yule tree. Spooky shadows on the wall.
info: https://en.wikipedia.org/wiki/Horned_God

Some thoughts on the importance of names

I often go to a lot of effort to find, and use, the correct spelling of a person's name if that name originates from a non-Latin writing system (Cyrillic, Hebrew, Chinese logographs, etc.). Sometimes, if writing space is limited (e.g. Twitter), I don't even bother adding the transliterated version in brackets after it.

Writing a person's name in a local character set is fine for phonetic ease, but it should never be confused with the person's actual name.

Even if one dislikes a person, one should do one's best to spell the name as it should be spelled, in the correct alphabet. It is not politeness; it is correctness.

There is also the heinous practice [as personally witnessed in Irish schools] of translating a person's name between languages: not just transliteration, but actual translation of the words used in a name.¹

You cannot translate a name. A name is syntax, not semantics. The person is the semantic.

¹ Example: the Irish translation of "Game of Thrones" changes 'Jon Snow' into a literal 'Seán an tSneachta' https://twitter.com/DirkVanBryn/status/790155947490549760
N.B. I am still undecided about the process of reversing name order during transliteration (as can happen with Asian names).

Summer in February (2013)



An unfortunate film. What could have been an engaging and scenic period drama was ultimately ruined by bad storytelling. The film is based on a true story; perhaps the film-makers were just too familiar with the subject material and got lost sketching a field of mere junctures, only hinting at the stories and personalities beyond. The film lacks the zest of surrealism needed to excuse the, frankly, bizarre choices made by the characters.



info: https://en.wikipedia.org/wiki/Summer_in_February
discussion: https://www.filmboards.com/board/12184287/

FFmpeg: Temporal slice-stacking ['slit-scan'] effect (aka 'Wobbly Video')


An old video effect¹, experimented with in 2013 (using processing.org)², now revisited using FFmpeg³. The concept is to take one line of pixels (x or y) from a frame, chosen relative to that frame's position in the video, and stack those lines into a new frame, incrementing the starting point while progressing through the timeline of the video.

This is somewhat similar to the effect commonly seen (these days) with the "rolling shutter" artefact of certain digital photography. See 'Ancillary footage' at the bottom of the post for an overlay version that may help as a visual aid in understanding.

In the demonstration above (and the longer videos below) the frame is divided into four quadrants: top-left is the original; top-right are horizontally stacked frames (bottom-to-top); bottom-left are vertically stacked frames (right-to-left); bottom-right are vertically stacked frames that have then been stacked horizontally.
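As a minimal sketch of the core filterchain used in the script below (filenames and values here are illustrative): build a single horizontally slice-stacked output frame N, where row k of the output is pixel-row k of input frame N+k.

# Hedged one-off example, distilled from the full script that follows
N=0; height=360
ffmpeg -i "input.mkv" \
   -vf "select=gte(n\,${N}),crop=in_w:1:0:n,tile=1x${height}" \
   -vframes 1 "slice-stacked_${N}.png"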

#!/bin/bash
# Temporal slice-stacking effect with FFmpeg (aka 'wibbly-wobbly' video).
# See 'NOTES' at bottom of script.

# Ver. 2017.10.01.22.14.08
# source: http://oioiiooixiii.blogspot.com

function cleanUp() # tidy files after script termination
{
   rm -rf "$folder" \
   && echo "### Removed temporary files and folder '$folder' ###"
}
trap cleanUp EXIT

### Variables
folder="$(mktemp -d)" # create temp work folder
duration="$(ffprobe "$1" 2>&1 | grep Duration | awk  '{ print $2 }')"
seconds="$(echo $duration \
         | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }' \
         | cut -d '.' -f 1)"
fps="$(ffprobe "$1" 2>&1 \
      | sed -n 's/.*, \(.*\) fps,.*/\1/p' \
      | awk '{printf("%d\n",$1 + 0.5)}')"
frames="$(( seconds*fps ))"
width="640" # CHANGE AS NEEDED (e.g. width/2 etc.)
height="360" # CHANGE AS NEEDED (e.g. height/2 etc.)

### Filterchains
stemStart="select=gte(n\,"
stemEnd="),format=yuv444p,split[horz][vert]"
horz="[horz]crop=in_w:1:0:n,tile=1x${height}[horz]"
vert="[vert]crop=1:in_h:n:0,tile=${width}X1[vert]"
merge="[0:v]null[horz];[1:v]null[vert]"
scale="scale=${width}:${height}"

#### Create resized video, or let 'inputVideo=$1'
clear; echo "### RESIZING VIDEO (location: $folder) ###"
inputVideo="$folder/resized.mkv"
ffmpeg -loglevel debug -i "$1" -vf "$scale" -crf 10 "$inputVideo" 2>&1 \
|& grep 'frame=' | tr \\n \\r; echo

### MAIN LOOP
for (( i=0;i<"$frames";i++ ))
do
   echo -ne "### Processing Frame: $i of $frames  ### \033[0K\r" 
   ffmpeg \
   -loglevel panic \
      -i "$inputVideo" \
      -filter_complex "${stemStart}${i}${stemEnd};${horz};${vert}" \
      -map '[horz]' \
         -vframes 1 \
         "$folder"/horz_frame${i}.png \
      -map '[vert]' \
         -vframes 1 \
         "$folder"/vert_frame${i}.png
done

### Join images (optional sharpening, upscale, etc. via 'merge' variable)
echo -ne "\n### Creating output videos ###"
ffmpeg \
   -loglevel panic \
   -r "$fps" \
   -i "$folder"/horz_frame%d.png \
   -r "$fps" \
   -i "$folder"/vert_frame%d.png \
   -filter_complex "$merge" \
   -map '[horz]' \
      -r "$fps" \
      -crf 10 \
      "${1}_horizontal-smear.mkv" \
   -map '[vert]' \
      -r "$fps" \
      -crf 10 \
      "${1}_verticle-smear.mkv"

### Finish and tidy files 
exit

### NOTES ######################################################################

# The input video is resized to reduce frames needed to fill frame dimensions 
# (which can produce more interesting results). 
# This is done by producing a separate video, but it can be included at the 
# start of 'stemStart' filterchain to resize frame dimensions on-the-fly. 
# Adjust 'width' and 'height' for alternate effects.

# For seamless looping, an alternative file should be created by looping
# the desired section of video, but set the number of processing frames to 
# original video's 'time*fps' number. The extra frames are only needed to fill 
# the void [black] area in frames beyond loop points.
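#
# One (hedged) way to produce such a looped file, assuming the loop section has
# already been cut out as 'loop-section.mkv' (filenames here are illustrative):
# ffmpeg -stream_loop 2 -i "loop-section.mkv" "looped-input.mkv"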

download: ffmpeg_wobble-video.sh

Alien:Covenant (2017)



Milking a dead horse: I kept asking myself while watching, "but haven't we seen this film before‽"

Related note:



info: https://en.wikipedia.org/wiki/Alien:_Covenant

FFmpeg: Rainbow trail chromakey effect



UPDATE 2020-07-20: A new version has been published with significant improvements!
https://oioiiooixiii.blogspot.com/2020/07/ffmpeg-improved-rainbow-trail-effect.html

An effect loosely inspired by old Scanimate¹ analogue video effects. The process involves stacking progressively delayed, and colourised, instances of the input video on top of each other. These overlays are blended based on a chosen colourkey, or chromakey. The colour values and number of repetitions can be easily changed, though with higher numbers [in test cases, 40+], buffer underflows may be experienced.
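As a minimal sketch of a single 'layer' of the effect (colour and key values here are arbitrary): delay and tint a copy of the video, then colour-key the original on top of it, so the tinted copy shows through the keyed-out [black] regions.

# Hedged sketch: one delayed, tinted copy with the original colour-keyed on top
ffmpeg -i "input.mp4" -vf "
   split[orig][copy];
   [copy]colorchannelmixer=2:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0,setpts=PTS+0.1/TB[base];
   [orig]colorkey=0x000000:0.1:0.4[keyed];
   [base][keyed]overlay" -c:v huffyuv "input_single-trail.avi"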

#!/bin/bash

# Generate ['Scanimate' inspired] rainbow trail video effect with FFmpeg
# (N.B. Resource intensive - consider multiple passes for longer trails) 
# version: 2017.08.08.13.47.31
# source: http://oioiiooixiii.blogspot.com

function rainbowFilter() #1:delay 2:keytype 3:color 4:sim val 5:blend 6:loop num
{
   local delay="PTS+${1:-0.1}/TB" # Set delay between video instances
   local keyType="${2:-colorkey}" # Select between 'colorkey' and 'chromakey'
   local key="0x${3:-000000}"     # 'key colour
   local chromaSim="${4:-0.1}"    # 'key similarity level
   local chromaBlend="${5:-0.4}"  # 'key blending level
   local colourReset="colorchannelmixer=2:2:2:2:0:0:0:0:0:0:0:0:0:0:0:0
                     ,smartblur"
   # Reset colour after each colour change (stops colours heading to black)
   # 'smartblur' to soften edges caused by setting colours to white

   # Array of rainbow colours. Ideally, this could be generated algorithmically
   local colours=(
      "2:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0" "0.5:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0"
      "0:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0" "0:0:0:0:2:0:0:0:0:0:0:0:0:0:0:0"
      "2:0:0:0:2:0:0:0:0:0:0:0:0:0:0:0" "2:0:0:0:0.5:0:0:0:0:0:0:0:0:0:0:0"
      "2:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0"
   )

   # Generate body of filtergraph (default: 7 loops. Also, colour choice mod 7)
   for (( i=0;i<${6:-7};i++ ))
   {
      local filter=" $filter
                     [a]$colourReset,
                        colorchannelmixer=${colours[$((i%7))]},
                        setpts=$delay,
                        split[a][c];
                     [b]${keyType}=${key}:${chromaSim}:${chromaBlend}[b];
                     [c][b]overlay[b];"
   }
   printf "split [a][b];${filter}[a][b]overlay"
}

ffmpeg -i "$1" -vf "$(rainbowFilter)" -c:v huffyuv "${1}_rainbow.avi"
download: ffmpeg_rainbow-trail.sh
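For reference, the final line of the script could be adjusted to pass custom parameters to the function (the values here are arbitrary), following the argument order noted in the function header: delay, key type, key colour, similarity, blend, loop count.

# Hedged example invocation with custom parameters
ffmpeg -i "$1" -vf "$(rainbowFilter 0.05 chromakey 00FF00 0.2 0.3 14)" \
   -c:v huffyuv "${1}_rainbow.avi"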

This is a top-down approach to building the effect. Another [possibly better] solution is to build the layers from the bottom up (pre-calculating the PTS delay for each layer, i.e. "layer number x PTS delay"). This might improve the fidelity of the top layer in certain videos. Another idea is to split the input into three instances rather than two, and 'key overlay the third at the very end of the filtergraph.


A concatenation of all videos generated during testing and development.

¹ Scanimate video synthesizer: http://scanimate.com/
original video: https://www.youtube.com/watch?v=god7hAPv8f0

FFmpeg: Extract section of video using MPV screen-shots



An unorthodox, but simple and effective, way of accurately extracting a section of video from a larger video file, using MPV screen-shots (with a specific file-naming scheme) for 'in' and 'out' points.

The Bash commands below serve only to demonstrate the general idea; there is no error handling whatsoever.
#!/bin/bash
# Extract section of video using time-codes taken from MPV screen-shots
# Requires specific MPV screen-shot naming scheme: screenshot-template="%f__%P"
# N.B. Skeleton script demonstrating basic operation

filename="$(ls -1 *.jpg | head -1)"
startTime="$(cut -d. -f-2 <<< "${filename#*__}")"
filename="${filename%__*}"
endTime="$(cut -d_ -f3 <<<"$(ls -1 *.jpg | tail -1)" | cut -d. -f-2)"
ffmpeg \
   -i "$filename" \
   -ss "$startTime" \
   -to "$endTime" \
   "EDIT__${filename}__${startTime}-${endTime}.${filename#*.}"
Another, perhaps more sensible, approach is to script it all through MPV itself. However, that ties the technique down to MPV, whereas this 'screen-shot' idea allows it to be used with other media players that offer timestamps in the filename. It is also a little more tangible: you can create a series of screen-shots and later decide which ones are timed better.
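For completeness, the MPV configuration that the naming scheme relies on might look something like this (a hedged sketch; '%f' expands to the media filename and '%P' to the playback time with milliseconds):

# ~/.config/mpv/mpv.conf (assumed location)
# Original filename plus precise playback time; screenshots saved as .jpg,
# which is what the script's '*.jpg' glob expects
screenshot-template="%f__%P"
screenshot-format=jpg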

video shown in demo: “The Magic of Ballet With Alan and Monica Loughman" DVD (2005)

UPDATE: September 11, 2018



I recently happened upon an MPV Lua script created for practical video extraction.


It really works well, and I now find myself using it every time I need to clip a section of video.
link: https://github.com/ekisu/mpv-webm

FFmpeg: Simple video editor with Zenity front-end


A proof-of-concept implementation of a simple, but extensible, video editor, based on FFmpeg with a Zenity interface. It remains a proof of concept, as bugs still exist and the project is abandoned.

The goal was to create a video editor with the simplest of requirements: tools found on most popular GNU/Linux distributions, plus a standard installation of FFmpeg (and FFplay). The original idea was to create just a video-clipping tool; however, the facility to add functionality was included. Extending the editor involves creating separate scripts with Zenity dialogs for each added feature.

While the effectiveness of the implementation is questionable, some interesting concepts remain, such as a scrub-bar for FFplay created from just a set of filterchains, a novel approach to referencing time-stamps from FFplay, and the FFmpeg switches needed to force edits on non-keyframes (the results of which are demonstrated in the video above).

download: ZenityVideoEditor_0.1.tar.gz

original version: 2016, 15th February https://twitter.com/oioiiooixiii/status/699239047806500864

Filthy Frank: "Pinku Dragon"


The effect was created by first extracting the foreground in the Flowblade video editor [the FFmpeg version of this process was demonstrated previously¹].


In the animation, the Flowblade window is shown. The blur amount of a filter on the mask is being altered, which shows the mask area growing and shrinking as its edges become fuzzier.

Each cut-out frame was then sequentially merged and overlaid on a background frame, creating the final animation. However, this was not the original goal of the process; if it had been, it might have been easier to create using ImageMagick's stacking functionality.²

[ Demonstrating the full procedure involved in foreground extraction using Flowblade may appear in a future blog post ]
¹ see also: https://oioiiooixiii.blogspot.com/2016/09/ffmpeg-extract-foreground-moving.html
² see also: https://oioiiooixiii.blogspot.com/2017/01/long-exposure-photography-compared-to.html
source video: https://www.youtube.com/watch?v=S6bQibFNs2E
original upload: (Twitter) August 5, 2016

Deceptive 2002 'Denny' Euro conversion calculator



Bash: Youtube-dl script which adds channel-id to video filenames

UPDATE: 2020-01-15
# A simple 'youtube-dl' one-liner that can replace everything else in the script
youtube-dl "$1" -f 'bestvideo+bestaudio' -o "%(channel_id)s---%(id)s---%(title)s.%(ext)s"

Sometime between publishing this script (and subsequent unpublished versions) and now, 'youtube-dl' has added 'channel_id' as an output filename descriptor, making this code largely defunct.

more info: https://github.com/ytdl-org/youtube-dl#output-template

#!/bin/bash
################################################################################
# Download YouTube video, adding 'channel ID' to downloaded video filename¹
# - Arguments: YouTube URL
# source: https://oioiiooixiii.blogspot.com
# version: 2017.11.26.00.04.10
# - Fixed filename issue where youtube-dl uses "mkv" container  
# -------  2017.11.11.15.19.06
# - Changed 'best (mp4)' to best anything (for vp9 4K video) 
# -------  2017.08.05.22.50.15
################################################################################

# Checks if video already exists in folder (checks YouTube ID in filename)
[ -f *"${1##*=}.m"* ] \
   && echo "*** FILE ALREADY EXISTS - ${1##*=} ***" \
   && exit

# Download html source of YouTube video webpage
html="$(wget -qO- "$1")"

# Extract YouTube channel ID from html source
channelID="$(grep channelId <<<"$html" \
            | tr \" \\n \
            | grep -E UC[-_A-Za-z0-9]{21}[AQgw])"

# Download best version of YouTube video
youtube-dl -f 'bestvideo+bestaudio' \
            --add-metadata \
            "$1"

# Download best (MP4) version of YouTube video
#youtube-dl -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' \
#            --add-metadata \
#            "$1"

# Get filename of video created by youtube-dl
filename="$(find . -maxdepth 1 -name "*${1##*=}.m*" \
            | cut -d\/ -f2)"

# Rename filename
echo "Renaming file to: ${channelID}_${filename}"
mv "$filename" "${channelID}_${filename}"

### NOTES ######################################################################

# ¹2017, May 21: Waiting for this to be implemented in youtube-dl
# youtube-dl -v -f137+140 -o '%(channel_id)s-%(title)s-%(id)s.%(ext)s'
# https://github.com/rg3/youtube-dl/issues/9676
download: ytdl.sh
context: https://github.com/rg3/youtube-dl/issues/9676

【中森明菜】:『少女A』(Akina Nakamori: "Girl A") - 1982, November 25th


source video: https://www.youtube.com/watch?v=kO2meEexNsE

FFmpeg: 144 (16x9) grid of random Limmy Vine videos


It would have been nice to complete this all in one FFmpeg command (building such a command is a relatively trivial 'for loop' affair¹; a sketch follows the notes below), but the level of file IO made this impossible (for my setup, at least). Perhaps with smaller file sizes and fewer videos it would be more practical.

# Some basic Bash/FFmpeg notes on the procedures involved: 

# Select random 144 videos from current folder ('sort -R' or 'shuf')
find ./ -name "*.mp4" | sort -R | head -n 144

# Generate 144 '-i' input text for FFmpeg (files being Bash function parameters)
echo '-i "${'{1..144}'}"'
# Or use 'eval' for run-time creation of FFmpeg command
eval "ffmpeg $(echo '-i "${'{1..144}'}"')"

# VIDEO - 10 separate FFmpeg instances

# Create 9 rows of 16 videos with 'hstack', then use these as input for 'vstack'
[0:v][1:v]...[15:v]hstack=16[row1];
[row1][row2]...[row9]vstack=9
# [n:v] Input sources can be omitted from stack filters if all '-i' files used

# AUDIO - 1 FFmpeg instance

# Mix 144 audio tracks into one output (truncate with ':duration=first' option)
amix=inputs=144

# If needed, normalise audio volume in two passes - first analyse audio
-af "volumedetect"
# Then increase volume based on 'max' value, such that 0dB not exceeded 
-af "volume=27dB"

# Mux video and audio into one file
ffmpeg -i video.file -i audio.file -map 0:0 -map 1:0 out.file

# Addendum: some further thoughts on reflection: perhaps piping the files to an FFmpeg instance with a 'grid' filter might simplify things, or loading the files one by one inside the filtergraph via 'movie=' might be worth investigating.
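As a hedged sketch of the 'for loop' approach mentioned above, the 16x9 hstack/vstack filtergraph could be generated along these lines (variable names are illustrative):

# Build the filtergraph for a 16x9 grid of 144 inputs
filter=""; rows=""
for (( r=0; r<9; r++ ))
do
   row=""
   for (( c=0; c<16; c++ )); do row+="[$(( r*16 + c )):v]"; done
   filter+="${row}hstack=16[row${r}];"
   rows+="[row${r}]"
done
filter+="${rows}vstack=9"
# use with: ffmpeg <144 x '-i' inputs> -filter_complex "$filter" grid-output.mp4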
¹ See related: https://oioiiooixiii.blogspot.com/2017/01/ffmpeg-generate-image-of-tiled-results.html
context: https://en.wikipedia.org/wiki/Limmy
source videos: https://vine.co/Limmy

FFmpeg: Predator [1987 movie] "Adaptive Camouflage" chromakey effect


A simple Bash script invoking FFmpeg to create a "cloaking" effect similar to that seen in the 1987 film "Predator"¹. It needs a little more work to make it more accurate; perhaps adjusting curves or levels for each iteration to make them more defined, etc.

#!/bin/bash

# Create Predator [1987 movie] "Adaptive Camo" chromakey effect in FFmpeg
# - Takes arguments: filename, colour hex value (defaults to green).
# ver. 2017.06.25.16.29.43
# source: http://oioiiooixiii.blogspot.com

function setDimensionValues() # Sets global size variables based on file source 
{
   dimensions="$(\
      ffprobe \
      -v error \
      -show_entries stream=width,height \
      -of default=noprint_wrappers=1 \
      "$1"\
   )"
      
   # Create "$height" and "$width" var vals
   eval "$(head -1 <<<"$dimensions");$(tail -1 <<<"$dimensions")"
}

function buildFilter() # Builds filter using core filterchain inside for-loop
{
   # Set video dimensions and key colour
   setDimensionValues "$1"
   colour="0x${2:-00FF00}"
   oWidth="$width"
   oHeight="$height"
   
   # Arbitrary scaling values - adjust to preference
   for ((i=0;i<4;i++))
   {
      width="$((width-100))"
      height="$((height-50))"
      printf "split[a][b];
            [a]chromakey=$colour:0.3:0.06[keyed];
            [b]scale=$width:$height:force_original_aspect_ratio=decrease,
               pad=$oWidth:$oHeight:$((width/4)):$((height/4))[b];
            [b][keyed]overlay,"
   }
   printf "null" # Deals with hanging , character in filtergraph
}

# Generate output
ffplay -i "$1" -vf "$(buildFilter "$@")"
#ffmpeg -i "$1" -vf "$(buildFilter "$@")" -an "${1}_predator-fx.mkv"
video source: https://www.youtube.com/watch?v=7UdhuPnWpHA
¹ film: https://en.wikipedia.org/wiki/Predator_(film)
context: https://twitter.com/oioiiooixiii/status/868527906682789889
context: https://twitter.com/oioiiooixiii_/status/868614704394055680

Bash: Irish Weather


function weather2morrow() 
{ 
   echo "Weather forecast for"\
        "$(date +%A,\ %d\ %B\ %Y --date=tomorrow):"\
        "Sunny spells & scattered showers 🌦 " 
}

Bonus content from: June, 2012


A "good Summer" means high humidity...

Bonus content from: June, 2009


iGoogle weather widget showing forecast...

The United States May Drop Kim Kardashian's Ass On Iran

Image shows Kim Kardashian in jeans, white t-shirts, and red high-heel shoes, falling towards Iran, after being dropped from a B-2 stealth bomber

America is currently spending tens of millions of dollars on developing a bomb that is capable of penetrating and destroying Iranian nuclear facilities deep within the country's mountain ranges¹.

A nuclear strike would of course be more than enough to do the job, but even unscrupulous America would be incapable of delivering such an ironic plan as using nuclear warfare to stop a hypothesised nuclear war, so they set Lockheed Martin and AFRL the task of developing a large conventional bomb to do the job.

Now a radical new idea has sprung forth from the crazy minds of the US military's R&D²: they plan to turn Kim Kardashian's buttocks into a Massive Ordnance Penetrator (MOP).

Close-up of Kim Kardashian squatting in white dress. The dress is covered in U.S. air-force insignia, including American flag and seal, and phrases: '50 Megatons' and 'God bless America'

The military scientists believe that due to the size and consistency of the celebrity's rear (denser than plutonium), a drop from above the stratosphere would result in a 50-megaton explosion and an impact crater 2 miles in diameter. Tests are currently being run to see if there is a risk of nuclear fallout being released from the "KD" (Kardashian Device). They say there is a small chance of a toxic release on the way down.

Mahmoud Ahmadinejad in sunglasses pointing at camera with 'Drop dat ass' written below it
Iranian president, محمود احمدی‌نژاد (Mahmoud Ahmadinejad), gave a chilling reply to the news of America's development of this new biological weapon. Apparently goading the United States into war, he is on record as saying: "Do it! Drop dat ass".

¹https://www.wsj.com/articles/SB10001424052970203363504577187420287098692
²Rear-search and Development

content originally published: June, 2012

MacMillan's "The Rite of Spring" + Magma - 'Slag Tanz'.








source video: https://www.youtube.com/watch?v=GEOi4ZzUud4

Bash: Glitch images using sed



Once an image is converted to text (via ImageMagick), any form of text manipulation (not just sed) can be used to create different results. The process is slow, however, and should be seen as a novelty rather than a legitimate alternative to pixel-array manipulation in a different programming environment.

# Use ImageMagick to convert image to text file
convert merkel.jpg merkel.txt

# sed replaces every occurrence of $i value with '0' (except in first line)
for (( i=0; i<255; i++))
{ 
   sed '1! s,'"$i"',0,g' < merkel.txt \
   | convert - "merkel_$i.png"
}
Animated effects can also be interesting...
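A hedged way to assemble the generated frames into such an animation with FFmpeg (filenames follow the loop above):

# Join the 'merkel_*.png' frames into a video
ffmpeg -framerate 25 -i "merkel_%d.png" -c:v libx264 -pix_fmt yuv420p "merkel_glitch.mkv"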

Bash: Watch 5 QVC YouTube live-streams at once, using MPV


For no other reason than "just because". The Bash script is generic enough to be used in other scenarios. Most work is done by MPV and its '--ytdl-format' option. A small delay is added before each mpv call, to avoid swamping YouTube with concurrent video requests.

#!/bin/bash

# Stream 5 QVC YouTube live-streams simultaneously.
# - Requires 'mpv' - N.B. Kills all running instances of mpv when exiting.
# - See YouTube format codes, for video quality, below.
# ver. 2017.02.11.21.46.52

### YOUTUBE VIDEO IDS ##########################################################

# QVC ...... USA .......... UK ........ Italy ....... Japan ...... France
videos=('2oG7ZbZnTcA' '8pHCfXXZlts' '-9RIKfrDP2E' 'wMo3F5IouNs' 'uUwo_p57g5c')

### FUNCTIONS ##################################################################

function finish() # Kill all mpv players when exiting
{
  killall mpv
}
trap finish EXIT

function playVideo() # Takes YouTube video ID
{
   sleep "$2" # The "be nice" delay
   mpv --quiet --ytdl-format 91 https://www.youtube.com/watch?v="$1" 
}

### BEGIN ######################################################################

for ytid in "${videos[@]}"; do ((x+=2)); (playVideo "$ytid" "$x" &); done
read -p "Press Enter key to exit"$'\n' # Hold before exiting
#zenity --warning --text="End it all?" --icon-name="" # Zenity hold alternative
exit

### FORMAT CODES ###############################################################

# format code  extension  resolution note
# 91           mp4        144p       HLS , h264, aac  @ 48k
# 92           mp4        240p       HLS , h264, aac  @ 48k
# 93           mp4        360p       HLS , h264, aac  @128k
# 94           mp4        480p       HLS , h264, aac  @128k
# 95           mp4        720p       HLS , h264, aac  @256k
# 96           mp4        1080p      HLS , h264, aac  @256k

The story of Donald Trump's presidential Inauguration, told through the medium of scat painting




Long-exposure photography compared to image-stacking video frames (ImageMagick/FFmpeg)



Pictured above: comparisons of images made from a segment on "Good Mythical Morning" involving "light painting". In the top-left, a 30-second exposure from a still camera in the studio. Below it, an image made using ImageMagick's '-evaluate-sequence' function on all frames taken from the 30 seconds of video; in this case, the 'max' setting was used, which stacks maximum pixel values. In the top-right, a single frame from the video, and below it, 100 frames stacked with FFmpeg using sequential 'tblend' filters.

# ImageMagick - Use with extracted frames or FFmpeg image pipe (limited to 4GB)
 convert -limit memory 4GB frames/*.png -evaluate-sequence max merged-frames.png

# FFmpeg - Chain of tblend filters (N.B. inefficient - better ways to do this)
ffmpeg -i video.mp4 -vf tblend=all_mode=lighten,tblend=all_mode=lighten,... 
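If the long chain is wanted, it can be generated rather than typed out; a hedged Bash sketch:

# Generate a chain of 100 sequential 'tblend' filters
count=100
chain="$(printf 'tblend=all_mode=lighten,%.0s' $(seq "$count"))"
ffmpeg -i video.mp4 -vf "${chain%,}" stacked.mkv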
As a comparison, here is an image made from the same frames but using 'mean' average with ImageMagick.
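That 'mean' version is presumably just the same stacking command with the evaluator swapped (a hedged sketch):

# Average pixel values across all frames instead of taking the maximum
convert -limit memory 4GB frames/*.png -evaluate-sequence mean merged-frames-mean.png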



A video demo for the FFmpeg version


source video: https://www.youtube.com/watch?v=1tdKZYT4YLY&t=2m4s