Extracting Light Curves from RunCam NightEagle Astro video files
The RunCam NightEagle Astro video camera presents two differences from most
of the other cameras used for occultation data. First, the sensor is a
rolling-readout CMOS sensor. The rolling readout (or rolling shutter) is
very different from the interlaced CCD sensors in most of our other cameras.
With a rolling-readout sensor there are no true "field" exposures: assuming
the star images span only a few scanlines (pixels), each star is exposed for
a full 1/30-second frame (1/25 second in PAL mode), so the data should be
evaluated as FRAMES (not fields). Second, the NightEagle Astro has a
misalignment issue with the video fields in its analog video output signal.
The analog video output is divided into frames, and each frame is further
divided into two fields. Normally the data in each frame of an analog video
output corresponds to the data from one "exposure" of the camera's sensor;
however, this is not true for the NightEagle Astro: its fields are shifted
by one position. To reconstruct the actual frame exposure from the CMOS
sensor, we must shift the fields in the video output by one position.
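To make this remapping concrete, here is a small conceptual sketch (written
in Python purely for illustration; the list-of-fields representation and the
function name are my own assumptions and are not part of the AviSynth
workflow described below). It pairs field 1 of the NEXT frame with field 2
of the current frame, which is the same re-pairing the script at the end of
this document performs on the real video fields.

# Conceptual illustration only: given the fields in analog output order,
# re-pair field 1 of the NEXT frame with field 2 of the current frame
# to rebuild each sensor exposure.
def repair_field_order(fields):
    # fields example: ["F0f1", "F0f2", "F1f1", "F1f2", ...]
    frames = []
    # stop one frame early because the final frame has no following field 1
    for i in range(0, len(fields) - 2, 2):
        field1_from_next = fields[i + 2]   # field 1 of the following frame
        field2_current = fields[i + 1]     # field 2 of the current frame
        frames.append((field1_from_next, field2_current))
    return frames

print(repair_field_order(["F0f1", "F0f2", "F1f1", "F1f2", "F2f1", "F2f2"]))
# -> [('F1f1', 'F0f2'), ('F2f1', 'F1f2')]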
Step 1: Install AviSynth
I am currently using the newer AviSynth+, but the older versions should
work as well. Here is a link to the releases page for AviSynth+:
Releases · AviSynth/AviSynthPlus (github.com)
I installed the MSVC 2019 redistributable version (the one with the *_vcredist.exe suffix).
Step 2: Set up the script
Below is an AviSynth script that will "correct" the video output by
shifting the fields one position. Place this script (the text between the
dashed lines) in a text file with the extension *.avs, and change the
"c:\tmp\myvideo.avi" file name to point to your video file.
Step 3: Analyze the video in FRAME mode
Now you will open the corrected video and analyze the video FRAMES
(not fields) with your light-curve extraction program (e.g. LiMovie,
Tangra, PyMovie). You have two choices for how to proceed.
Option A:
Open the *.avs script file with your video program (LiMovie, Tangra,
PyMovie) and start your analysis in FRAME mode to generate a light
curve.
Option B:
Open the *.avs script file with VirtualDub and save a copy of your video
file. The copy will be "corrected" for the field-shift issue, and you can
open this corrected video file in your video analysis program (e.g. LiMovie,
Tangra, PyMovie) to generate a light curve.
Step 4: Review the Light Curve to extract timings
When reviewing the light curve, the data points represent FRAMES, with a
spacing of 1/30 second (NTSC; 1/25 second in PAL mode) between data points.
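If you want to sanity-check that spacing, the short sketch below (Python,
for illustration only; none of the light-curve programs require this)
prints the elapsed time of the first few frame-based data points using the
exact NTSC frame rate of 30000/1001, approximately 29.97 frames per second
(use 25 for PAL).

# Minimal sketch: spacing of frame-based data points at the NTSC frame rate.
# Swap in 25.0 for PAL recordings.
NTSC_FPS = 30000 / 1001            # ~29.97 frames per second
frame_interval = 1.0 / NTSC_FPS    # ~0.0334 s, i.e. roughly 1/30 second

for frame_index in range(5):
    print(f"data point {frame_index}: +{frame_index * frame_interval:.4f} s")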
------------------ start of AviSynth script ---------------------
# This filter attempts to fix up the framing issue in a video file from the NightEagle Astro video camera.
# Given the following sequence of frames/fields:
# Frame 0: Field 1
# Frame 0: Field 2
# Frame 1: Field 1
# Frame 1: Field 2
# this filter generates the following NEW ordering by replacing Field 1 of each frame with Field 1 from the NEXT frame:
# New Frame 0: Field 1 = [Frame 1: Field 1]
# New Frame 0: Field 2 = [Frame 0: Field 2]
# New Frame 1: Field 1 = [Frame 2: Field 1]
# New Frame 1: Field 2 = [Frame 1: Field 2]
#
DirectShowSource("c:\tmp\myvideo.avi",pixel_type="RGB")
Function MoveField1(clip c) {
    numFrames = FrameCount(c)
    # trim to avoid funky half frames from frame grabbers
    cT = Trim(c, 2, numFrames - 1)
    cT = AssumeFrameBased(cT)
    # SeparateFields puts field 2 first in the new stream
    fAll = SeparateFields(cT)
    fAll = AssumeFieldBased(fAll)
    # grab field 1 with an offset of one frame forward,
    # and field 2 from the start
    f1 = SelectEvery(fAll, 2, 3)
    f2 = SelectEvery(fAll, 2, 0)
    # combine the fields back into frames (with the offset applied to field 1);
    # f2 goes first since AviSynth wants field 2 first
    fNew = Interleave(f2, f1)
    fNew = AssumeFieldBased(fNew)
    cNew = Weave(fNew)
    return cNew
}

# apply the field-shift correction to the clip loaded above
MoveField1(last)
------------ end of script ------------------