Photometric Flicker Metrics: Analysis and Perspectives

Abstract—This paper complements a companion presentation and focuses mainly
on the mathematical framework behind flicker metrics. Its purpose is to review
the algorithms that extract the features of the temporal fluctuations of the
light level, and to establish some of their properties. As signals from the
physical world are invariably contaminated by noise, the robustness of the
metrics in approaching the "true" value will be assessed. On this basis,
alternatives will be studied.
Index Terms—Energy Saving Lamp (ESL); Compact Fluorescent
Lamp (CFL); Light Emitting Diode (LED); Photometric
Flicker; Flicker Percent (FP); Flicker Index (FI); Voltage Variation;
Voltage Quality

I. INTRODUCTION
Incandescent lamps powered by AC voltage exhibit flicker.
The explanation is that the thermal time constant of the light-emitting metal
wire is on the order of milliseconds, while one half-period equals 10
milliseconds on a 50 Hz network. Instantaneous power fluctuations are
translated into temperature oscillations, leading to temporal fluctuations of
the emitted flux. The first flicker metrics treated those oscillations as an
amplitude modulation of a signal around an average value. Such approaches were
then reused with the next technologies, fluorescent and discharge lamps. The
advent of Solid State Lighting (SSL) introduced new challenges. The most
important, in the authors' opinion, is that on/off modulation may be used as a
way to regulate the average current, leading to asymmetric square waves.
This approach based upon amplitude modulation only describes phenomena whose
typical base frequency is twice the distribution network frequency. In the
framework of Power Quality, as analysed by many authors [1], [2], [3], [4],
[5], it became obvious that low-frequency disturbances of the voltage envelope
are translated into hum, or into visible, slowly varying changes of the light
level. Another concern is that the lamp driver, whose primary function is to
absorb energy at the network frequency and adapt it to the lamp
characteristics [6], may use improperly designed regulation loops, leading to
oscillations without any external driving frequency.
Drapela [7] developed a light flickermeter following the IEC 61000-3-3:2013
standard. In this approach, the temporal light waveform is split using a
series of low-pass and band-pass filters, leading to a number of indices.
IEEE [8] defined a safe operating area based upon Percent Flicker, also known
as Michelson contrast.

Yet those metrics are supposed to reflect how humans would describe the
temporal variations of the light flux. This means that a number of elements
are cascaded: a photometric sensor, an acquisition system performing the
analog-to-digital conversion, and then an algorithm that associates a value
with the temporal data, with the aim of predicting the perceptibility of the
phenomenon. More accurately, the output should be a probability that typical
observers will detect the temporal fluctuations, find them acceptable,
complain about them, or be affected by them.
As explained, the research team performed an in-depth analysis of the
sensitivity of two flicker metrics using the GUM approach. In this part, the
emphasis is put mostly on the mathematics behind the characterisation metrics,
analysing in a broad sense their meaning and their sensitivity to various
noise sources.

Fig. 1: Definition of flicker metrics according to IEEE Standard
1789-2015 [8].

II. THEORETICAL BACKGROUND
A. Definitions
The IEC flicker definition requires observing the temporal values over a
window of at least two seconds, sampled at 20 kHz or more. Two metrics are
defined from such a data stream. The Percent Flicker (FP) is defined as the
Michelson contrast of the luminous flux waveform:

FP = 100% × (Φmax − Φmin) / (Φmax + Φmin)

where Φ(t) is the flux waveform, Φmax and Φmin are its maximum and minimum
over the evaluation interval, and T is the length of the evaluation interval,
an integer multiple of the flicker period. Thus FP accounts neither for
frequency nor for waveform shape.
The Flicker Index (FI) is defined as:

FI = Λ1 / Λ

where

Λ = ∫[0,T] Φ(t) dt is the total area under the flux waveform, Λ1 = ∫[0,T]
max(Φ(t) − Φavg, 0) dt is the part of that area lying above the average level
Φavg = Λ/T, and T is an integer multiple of the period of the fundamental
frequency of the analysed flux waveform. A graphical interpretation of these
definitions is given in Fig. 1.
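A minimal numerical sketch of these two definitions, assuming a uniformly
sampled flux waveform covering an integer number of flicker periods (the
Python function names are illustrative):

import numpy as np

def percent_flicker(phi):
    # Michelson contrast of the sampled flux waveform, in percent
    phi_max, phi_min = phi.max(), phi.min()
    return 100.0 * (phi_max - phi_min) / (phi_max + phi_min)

def flicker_index(phi):
    # Area above the average level divided by the total area under the curve;
    # the sample interval cancels in the ratio, so plain sums are sufficient.
    avg = phi.mean()
    area_above = np.sum(np.clip(phi - avg, 0.0, None))   # Lambda_1
    total_area = np.sum(phi)                             # Lambda
    return area_above / total_area

# Example: ten periods of a 100 Hz sinusoidal flux sampled at 20 kHz
fs, f0 = 20_000, 100
t = np.arange(0, 0.1, 1 / fs)
phi = 1.0 + 0.1 * np.cos(2 * np.pi * f0 * t)
print(percent_flicker(phi))   # ~10 (percent)
print(flicker_index(phi))     # ~0.032, i.e. m/pi for a modulation depth m = 0.1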
To probe further, let’s describe the luminous flux as an
amplitude-modulated signal:

where

  • y(t) is the time-dependent measured value;
  • M is the mean value in the absence of a modulating signal;
  • A is the amplitude of the pulsating component;
  • s(t) is a zero-mean perturbation of the base level, with upper level s_max and lower level s_min;
  • ω and φ are the pulsation and phase of the "carrier", which in most cases is at twice the network frequency.

In the absence of noise, FP can be expressed as

In the particular case where s_max is equal to −s_min, this can be further
simplified as

From which a few properties can be inferred:

1) the numerator reflects the modulation depth, i.e. the
ratio between the modulating component and the base
amplitude;
2) the denominator expresses the amplitude of the base
signal;
3) the metric is insensitive to the shape of the perturbing
signal.

Moreover, if the modulation depth increases beyond some point, the minimum
will tend towards zero and, by definition, the FP value will saturate at 100%.
In the case of on-off modulation, this metric therefore provides no
information about the duty cycle.
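This saturation can be checked with a hypothetical PWM dimming waveform (the
1 kHz switching frequency and the duty cycles are arbitrary illustration
values):

import numpy as np

t = np.arange(0, 0.1, 1 / 20_000)                        # 0.1 s at 20 kHz
for duty in (0.1, 0.5, 0.9):
    phi = (np.mod(t * 1000, 1.0) < duty).astype(float)   # 1 kHz on-off flux
    fp = 100.0 * (phi.max() - phi.min()) / (phi.max() + phi.min())
    print(duty, fp)   # FP is 100 % for every duty cycle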

In the presence of noise, this approach can be considered a worst-case
scenario. The observation window will contain many maxima of the original
signal, and the one with the greatest positive perturbation will be chosen.
The same is true for the minimum, leading to the conclusion that such an
estimator is biased, as the minus sign in the numerator amplifies the effect
of the noise.
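A small Monte Carlo sketch of this bias, assuming additive white Gaussian
measurement noise (noise levels and trial count chosen arbitrarily):

import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 20_000, 100
t = np.arange(0, 1.0, 1 / fs)
clean = 1.0 + 0.1 * np.cos(2 * np.pi * f0 * t)   # true FP = 10 %

for sigma in (0.0, 0.005, 0.02):
    estimates = []
    for _ in range(200):
        y = clean + rng.normal(0.0, sigma, clean.size)
        estimates.append(100.0 * (y.max() - y.min()) / (y.max() + y.min()))
    print(f"sigma = {sigma:.3f}   mean estimated FP = {np.mean(estimates):.1f} %")
# The estimate only grows with the noise level: max() selects the largest
# positive excursion and min() the largest negative one, so both tails of the
# noise inflate the numerator.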
The analysis of the FI is straightforward thanks to the linearity of the
integral operator. The M term of eq. 5 contributes to the total area, but not
to the term Λ1. Under the hypothesis that the observation window encompasses
an integer number of periods and that s(t) is zero-mean, the pulsating term
does not contribute to the global integral, while all parts where s(t) is
greater than zero contribute to Λ1. The numerator thus reflects the
interactions between the modulating and modulated frequencies, including the
shape of the perturbing waveform, while the denominator is controlled by the
average value only. In the case of an on-off signal with on-value A over τ
seconds, off-value 0 and period T, the indicator is given by

FI = 1 − τ/T

If the modulation is symmetric, this value equals 1/2; if τ increases towards
T, the off-periods are called "silences" and the index tends towards zero,
while when τ decreases towards 0, the on-periods are called "pulses" and the
metric approaches one.
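With the area-based definition above, this behaviour can be checked
numerically (sampling parameters chosen for illustration only):

import numpy as np

T, n = 1e-3, 10_000                       # one modulation period, finely sampled
t = np.arange(n) * (T / n)
for duty in (0.1, 0.25, 0.5, 0.75, 0.9):
    phi = (t < duty * T).astype(float)    # on-value 1 over tau = duty*T, then 0
    avg = phi.mean()
    fi = np.sum(np.clip(phi - avg, 0.0, None)) / np.sum(phi)
    print(f"duty = {duty:.2f}   FI = {fi:.2f}   1 - tau/T = {1 - duty:.2f}")
# The computed values follow 1 - tau/T, matching the limits discussed above.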
In the presence of noise, integration over the whole window will mitigate the
noise effect. But some residual noise will remain on the average, and this
noise couples into Λ1 through two paths. First, an error on the area, equal to
the error on the average value multiplied by the width of the Λ1 interval, is
added. Second, the width itself changes as a function of the signal slope
close to the average value. Furthermore, the illustration of Fig. 1 shows a
signal with a few clean transitions around the average value; in the presence
of noise, there may be many transitions on short time scales. The definition
gives no hint on how to mitigate this effect.

B. Other aspects
So far it has been hypothesised that the observed signal is a mix of
well-defined frequencies. Real lamps may be more complex than that. As
explained, most Solid State Lighting ballasts use switching regulators for
efficiency reasons, with their own internal function generator. Over the
years, the research team at the Laplace laboratory has observed that every SSL
driver starts with an AC-to-DC converter, followed by an energy storage
element (since the conduction is discontinuous), which powers both the
lighting element and its controller. If the energy storage element is
insufficient, the LED may exhibit recurrent patterns over one switching cycle.
As explained earlier, the design of the regulation loop may also present
insufficient gain at low frequencies, leading to self-sustained oscillations.
Lastly, even when those effects are absent, the regulation loop may be
inadequate at rejecting the voltage oscillations downstream of the AC-to-DC
converter. This occurs mostly with single-stage drivers associated with small
lamps.

C. Physiological considerations
The translation of light stimuli into an image of the scene as perceived by
the brain is quite a complex process. Let us just point out a few elements:
1) the eye pupil adapts itself to the average scene luminance;
2) the cells acting as light sensors have some remanence: after being hit by
a photon, they take a few milliseconds before being ready to produce a new
signal;
3) the eyes themselves actively scan the scene; a single view in the brain
usually results from the merging of around 20 partial images.
In the authors' opinion, a flicker metric should take into account three
parameters: the average light level, the frequency, and the wave patterns.
Short light pulses will produce a brief excitation of the light detectors,
while short light silences will produce a long excitation followed by a pause.

D. Improving the existing metrics
At this point, it has been established that FI can effectively reject noise
provided an integer number of periods is observed, which may be difficult to
ensure in the presence of switching regulators.
The sensitivity of FP to noise could be reduced in two ways:
1) focusing on the signal envelope (sketched below); but this relies on the
Hilbert transform, with the effect of squaring the noise and introducing bias;
2) using synchronous detection, which brings the envelope back to DC; but this
is only applicable if the carrier can be reconstructed.
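A sketch of the first option, assuming additive white noise and using the
analytic-signal envelope from scipy.signal.hilbert; averaging the envelope is
one possible choice rather than a standardised procedure:

import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs, f0 = 20_000, 100
t = np.arange(0, 1.0, 1 / fs)
y = 1.0 + 0.1 * np.cos(2 * np.pi * f0 * t) + rng.normal(0, 0.02, t.size)

# Raw Michelson contrast: driven by the single largest noise excursions
fp_raw = 100.0 * (y.max() - y.min()) / (y.max() + y.min())

# Envelope route: analytic signal of the AC component, then averaging.
# The rectification of the noise by abs() leaves a residual positive bias.
envelope = np.abs(hilbert(y - y.mean()))
fp_env = 100.0 * envelope.mean() / y.mean()

print(f"raw FP ~ {fp_raw:.1f} %   envelope-based FP ~ {fp_env:.1f} %   (true value: 10 %)")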


III. PROBABILISTIC SIGNAL DESCRIPTION
As pointed out several times, the two studied metrics rest on assumptions such
as the ability to represent the basic signal as periodic and the sampling of
an integer number of periods. Those assumptions are not universal, in
particular with SSL lighting. We would like to present an approach to signal
characterisation using a field of mathematics called computational topology.
Let us start with a simple example. Close to the sea there is a rock which is
covered at high tide and emerges at low tide. When the sea retreats, a first
peak appears and gradually turns into an island. Then other peaks appear,
giving other islands separated by straits. At some point, a strait becomes
free of water: two islands merge. By convention, the earliest-appearing island
is said to absorb the other one, which is said to "die", i.e. to stop being an
island. This process can be performed while the water retreats, focusing on
local maxima, or while it rises, focusing on local minima. Each peak is
associated with a persistence level, defined as the height difference between
its birth and death levels. The highest peak has the greatest persistence,
which corresponds to the signal peak-to-peak range. The temporal information
is irrelevant. The signal is then represented in a 2D diagram where each point
corresponds to a (birth, death) pair.
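A compact sketch of this peak bookkeeping for a sampled signal is given below.
It is a standard zero-dimensional persistence computation over the local
maxima (a union-find sweep from the highest sample downwards), shown as an
illustration of the principle rather than the authors' implementation:

import numpy as np

def peak_persistence(x):
    # (birth, death) pairs of the local maxima of a 1-D signal: samples are
    # added from the highest value down; a new sample either creates a new
    # "island" (a peak is born) or merges two islands (the younger peak dies).
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)          # indices, highest value first
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, x[i]
        for j in (i - 1, i + 1):    # try to merge with already-emerged neighbours
            if 0 <= j < len(x) and j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                old, young = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                pairs.append((birth[young], x[i]))   # the younger island dies here
                parent[young] = old
    pairs.append((birth[find(order[0])], x.min()))   # the most persistent peak
    return pairs

# Example: noisy 100 Hz AM signal; genuine peaks separate from noise-born ones
fs = 20_000
t = np.arange(0, 0.1, 1 / fs)
sig = 1.0 + 0.1 * np.cos(2 * np.pi * 100 * t)
sig += np.random.default_rng(0).normal(0, 0.01, t.size)
pairs = peak_persistence(sig)
persistent = [p for p in pairs if p[0] - p[1] > 0.05]
print(len(pairs), len(persistent))   # many short-lived peaks vs. about ten persistent ones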

IV. EXTENDING FP USING FOURIER TRANSFORM

As explained, the FP metric was designed with the aim of characterising
amplitude-modulated signals. As already pointed out, it is therefore
insensitive to the shape of the perturbing signal. In the presence of on-off
modulation with light extinction, this metric reaches its maximum of 100%,
irrespective of the actual duty cycle.
We propose an approach based upon the Fourier transform, in a similar fashion
to the recently introduced Stroboscopic Visibility Measure metric. Its
objectives are defined as follows:

  • for consistency, the value for small-signal amplitude modulation where the envelope is a sine should be equal to the actual FP computation;
  • the computation should mitigate the influence of wideband white noise;
  • in the case of small-signal AM where the envelope is rectangular, the result should be slightly influenced by the duty cycle;
  • for on-off modulation, the value should be small for "silences" and higher for "pulses".

This can be accomplished as follows. Let Y(n) represent the Discrete Fourier
Transform of the temporal signal x(n), 0 ≤ n < N, where N is the number of
samples. Y(0) is equal to the DC value, i.e. the signal average. The extended
FP metric is simply defined as

where N1 and N2 are two suitably chosen values such that:

  • frequencies below N1 are considered irrelevant. Due to the windowing effect, there will be some ripple around the peak at Y(0); N1 should be chosen just above this peak width.
  • frequencies above N2 are considered irrelevant for the flicker computation. One such limit could be 3 kHz, as defined in IEEE Std 1789 [8].
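Since a small sinusoidal modulation must return the classical FP value, one
plausible form of this metric (an assumption, as the exact normalisation may
differ) sums the one-sided DFT magnitudes between the two bin limits and
normalises by the DC bin; the function name and the frequency limits below are
illustrative:

import numpy as np

def extended_fp(x, fs, f_low=5.0, f_high=3000.0):
    # One plausible DFT-based flicker percentage: sum of one-sided spectral
    # magnitudes between the two limits, relative to the DC bin Y(0).
    Y = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= f_low) & (freqs <= f_high)
    return 100.0 * 2.0 * np.sum(np.abs(Y[band])) / np.abs(Y[0])

# Consistency check with a small sinusoidal modulation
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = 1.0 + 0.1 * np.sin(2 * np.pi * 100 * t)
print(extended_fp(x, fs))   # ~10, matching the classical FP of this waveform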

V. RESULTS
A. Computational topology
To illustrate those methods, let us start from a noisy AM signal with 20000
samples, as illustrated in Fig. 2. The observation window runs over 1000
milliseconds of a base signal at 100 Hz. The AC peak value is one tenth of the
DC component amplitude, leading to an FP of 10%. The four peaks with the
greatest extent are labelled with their number. Red circles are local minima;
green circles are local maxima. The extrema are associated with their extent
in Fig. 3 by the red lines, while the green horizontal segments link the birth
and death locations.
The persistence diagram of Fig. 4 is built around one diagonal line. Peaks are
represented by their (birth, death) values. There is a group of ten points in
the lower right corner and another group in the upper left, indicating that
ten periods have been recorded.

These clouds of points correspond to values occurring in the vicinity of the
noise-free extrema. The points close to the diagonal are characterised by a
small extent and thus mostly arise from noise. This diagram also makes it
possible to establish the distribution of the extrema.
Lastly, this process was repeated while changing the noise level on a
logarithmic scale. When no noise is present, there are ten peaks; the noise
signal by itself contains around 400 peaks. The number of peaks thus appears
as an indicator of the signal-to-noise ratio, as illustrated in Fig. 5.
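The noise sweep can be sketched as follows; the noise amplitudes and the
resulting counts are illustrative and do not reproduce the paper's figures:

import numpy as np

rng = np.random.default_rng(2)
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)                      # 1000 ms window, 20000 samples
clean = 1.0 + 0.1 * np.cos(2 * np.pi * 100 * t)    # 100 Hz base signal, FP = 10 %

for sigma in np.logspace(-4, -1, 7):
    noisy = clean + rng.normal(0.0, sigma, t.size)
    # count raw local maxima; persistence filtering (Section III) would
    # additionally separate the genuine peaks from the noise-born ones
    n_peaks = np.sum((noisy[1:-1] > noisy[:-2]) & (noisy[1:-1] > noisy[2:]))
    print(f"sigma = {sigma:.1e}   local maxima = {n_peaks}")
# The count rises from roughly the number of genuine maxima of the clean
# signal towards the figure produced by the noise alone, tracking the
# signal-to-noise ratio.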