Research Article | Open access

Occupancy distribution estimation for smart light delivery with perturbation-modulated light sensing

Abstract

The advent of modern light-emitting diode (LED) techniques enables us to develop novel lighting systems with numerous previously unavailable features. Specifically, by using the fixtures both to illuminate and to interrogate the space, source-to-sensor communication becomes possible at very low cost. In this paper, we present a novel framework to estimate the occupancy distribution in an indoor space using color-controllable LED fixtures (the same fixtures that simultaneously provide the illumination) and sparsely distributed non-imaging color sensors. By modulating randomly generated perturbation patterns onto the drive signals of the LED fixtures and measuring the changes in the color sensor responses, we are able to recover a light transport model for the room. Two approaches are proposed to estimate the spatial distribution of the occupancy, based on a light blockage model and a light reflection model, respectively. These two approaches, which can be combined, can faithfully reveal the occupancy scenario of the indoor space while preserving the privacy of its occupants. An occupancy-sensitive lighting system can be designed based on this technique.

1 Background

As we move from incandescent bulbs to fluorescent bulbs, and on to modern LED fixtures, lighting solutions are becoming more and more energy efficient. A new direction in lighting research is to develop smart lighting systems — lighting systems that can “think” and deliver the right light where and when it is needed. Most such lighting systems have a set of sensors to capture the occupancy information in the space. With knowledge of the room’s spatial occupancy distribution in (near) real time, a lighting system can adjust the spatial and spectral distribution to reduce energy consumption and enhance human comfort, well-being, and productivity.

In smart lighting systems, the sensors being used can be generally divided into two categories: imaging sensors and non-imaging sensors. The computer vision community usually employs imaging sensors such as cameras and depth sensors to capture images, videos, and depth maps of the scene. An image, whether gray-level, RGB, or depth, has a 2D structure, which describes the spatial distribution of objects or people in the space. A great deal of high-level information can be inferred from such data with computer vision and pattern recognition methods, which enable various applications such as object detection and tracking, event detection, and traffic surveillance.

However, in most lighting applications, human-readable high-resolution images are not only unnecessary, but also undesirable, as they present an information security concern. For example, when monitoring the occupancy of a room for the task of intelligent lighting control, we only need a very rough estimation about which part of the room is occupied. Using cameras will raise the concern of privacy — people just feel uncomfortable being monitored by a camera. If the security of the camera network is compromised, there is an even greater risk to privacy.

To ameliorate these concerns, non-imaging sensors offer a good alternative to cameras. In this paper, we propose using low-cost color sensors that are based on photodiodes and color filters. The output of a non-imaging color sensor is usually only a few numeric values measuring the local luminous flux of different colors, rather than a focused image. These sensors present no privacy concerns. However, due to the very limited information that can be obtained from such color sensors, it is very difficult to infer high-level information from the sensor readings. Estimating the 2D or 3D occupancy distribution in the space from a limited number of 1D sensor outputs is an ill-posed and extremely challenging problem.

Fortunately, the emergence of modern LEDs unlocks a new direction for us. Modern LED fixtures are controllable over each color channel, and allow rapidly changing input to drive these channels. The changes in the light can be sensed by photodiodes, which makes source-to-sensor communication possible, giving birth to new techniques such as visible light communication (VLC), sometimes called light fidelity (Li-Fi) [1]–[6]. The idea of using visible light for both illumination and communication at the same time with the same fixture is often referred to as “dual-use lighting”. In our work, we measure the color sensor output under different lighting conditions. With repeated measurements, we can construct a model to describe the spatial transport of the light. Such a model captures rich information about the 3D space, and can be used to roughly estimate the occupancy distribution. With the estimated occupancy distribution, we can produce the lighting condition that best suits this occupancy scenario, improving energy efficiency, enhancing human comfort, well-being, and productivity, and even extending the lifespan of the LEDs to delay fixture replacement.

The remainder of this paper is organized as follows: In Section “Related work”, we review previous work related to our proposed technique. In Section “Testbed setup”, we introduce our testbed for experiments. In Section “Recovering the light transport matrix”, we describe how we solve for the light transport matrix in a lighting system. In Section “Perturbation-modulated lighting”, we introduce the perturbation-modulated lighting method, which is necessary for light transport sensing. Section “3D scene reconstruction with light blockage model” and Section “Floor-plane occupancy mapping with light reflection model” introduce the two approaches that we use for occupancy distribution estimation, based on wall-mounted sensors and ceiling-mounted sensors, respectively. Section “Results” reports the experimental results. Discussions are provided in Section “Discussions”, and Section “Conclusion” concludes the paper.

2 Related work

2.1 Occupancy-based lighting

A number of smart lighting systems have been designed to adjust the lighting condition according to the occupancy in the space, and there are various options for the occupancy sensor, from imaging sensors to non-imaging sensors. For example, in 1987, Rea and Jaekel used video systems, infrared, ultrasonic, and electric eyes to assess energy efficiency in lighting a staff room (6.0×8.8 m) [7]. In 1992, an imaging lighting control system called ImCon was proposed, which used a charge-coupled device (CCD) camera to monitor the occupancy in a test room (5.6×5.6 m) and control four fluorescent fixtures [8]. In 2009, Delaney et al. proposed using a network of passive infrared (PIR) sensors and light sensors to evaluate energy efficiency in lighting systems [9]. In 2010, Agarwal et al. proposed a smart building automation solution using a combination of PIR sensors and magnetic reed switch door sensors [10]. Recently, Aldrich et al. developed a lighting control application using networks of PIR sensors [11]. In 2010, Caicedo et al. looked into the problem of how to optimize the dimming levels of LED fixtures based on localized occupancy information [12]. A review paper by Guo et al. has comprehensively discussed different sensors that have been used in occupancy-based lighting control systems, including PIR sensors, ultrasonic sensors, audible sound sensors, microwave sensors, light barriers, video cameras, biometric systems, and pressure sensors [13]. Another review paper by Hassan et al. also discussed several occupancy detection techniques for lighting control applications, including PIR sensors, ultrasonic sensors, radio frequency identification (RFID), and cameras [14].

However, to the best of our knowledge, no prior work exists that uses non-imaging color sensors based on photodiodes and color filters, together with modulated illumination from the existing fixtures that simultaneously light the space, to implement occupancy-sensitive lighting control systems. Such color sensors can be built at very low cost, and since each sensor outputs only a few numeric values, they raise no privacy concerns. We provide a comparison between non-imaging color sensors and imaging sensors such as webcams or Kinect in Figure 1. Comparisons between PIR, ultrasonic, microwave, video, and several other sensors can be found in [13].

Figure 1. A comparison between imaging sensors and non-imaging color sensors that can be used for occupancy sensing in a smart lighting system.

The major difference between color sensors and other non-imaging occupancy sensors is that color sensors use visible light that is delivered by the fixtures, while PIR sensors use infrared, and ultrasonic sensors and audible sound sensors use sound waves. Also, a PIR sensor detects the infrared radiation emitted from an object, so it works well for detecting people or animals. The gradient of the change in the infrared field can be used to detect motions of people or animals, thus enabling applications such as burglar alarms and automatically-activated lighting systems. The color-sensor-based occupancy sensing technique proposed in this paper detects the change in the visible light field, which is often caused by blockage of light paths, or by changing reflection surfaces as people (etc.) move around the space. Thus, any object that affects the visible light field could be detected, rather than only objects that emit infrared radiation. Ultrasound sensors are active devices that emit ultrasonic sound waves and use the time interval between emitting and echoing to calculate the distance to objects. In contrast, color sensors cannot measure the distance, and they are passive sensors, although we actively add perturbations to the light from the existing fixtures. Ultrasonic sensors often suffer from false alarms while PIR sensors have more misses [13]. Audible sound sensors are seldom used in smart lighting systems because they are ill-suited to the problem. Environmental noise can cause a very high false alarm rate, and quiet occupants can cause a very high miss rate.

2.2 Light transport model

In many multi-source multi-sensor systems, such as sonar, ultrasound, and scanning electron microscopes, the measurement process follows an affine relationship:

y=Ax+b,
(1)

where the vector x collects the input signals to all sources, and the vector y collects the measurements from all sensors. The matrix A can be understood as the coefficients of the process, and the vector b is the systematic bias.

Specifically, the computer graphics community is very interested in visible light source and camera sensor systems, where it is often assumed that b=0, and the matrix A is often referred to as the light transport matrix. The light transport matrix is an effective tool for relighting real-world scenes (illuminating the scene with a virtual pattern as a post-process) [15]–[19], and can also be used to interchange the lights and cameras in a scene [20],[21], or for radiometric compensation [22].

In our smart lighting system, the vector x is the input to all LED fixtures, and the vector y is the measurements from all non-imaging color sensors. The lighting system may appear similar to a structured light system at first glance. However, there are several significant differences between the smart lighting problem as posed here and the structured light technique, which is often used to build models for computer graphics. First, in structured light, the source is usually a focused, high-resolution projector, projecting specific (sometimes complicated, but precise) structured light patterns onto the scene, and the sensor is usually a high-resolution camera [16],[17],[20]. However, in a smart lighting system, we usually only have a few fixtures and a few sensors due to the cost of hardware and installation. And the light itself is not structured at all, beyond the ordinary placement of fixtures in, for example, the ceiling. Thus, the light transport matrix A as used in, for example, computer graphics is usually very large, while in the smart lighting problem it is much smaller, and contains far less information about the space.

Second, in structured light, different pixels of the projector usually illuminate different non-overlapping regions of the scene. In contrast, in a smart lighting system, any fixture could conceivably illuminate the entire space, although different fixtures installed at different locations will have different luminous intensity distributions over the space. Similarly, in structured light, different pixels in the image captured by the camera correspond to different non-overlapping regions of the scene, whereas a color sensor receives luminous flux from a very wide field of view, albeit with a spatially varying sensitivity.

Further, unlike in computer graphics where people usually assume b=0, in a smart lighting system, the vector b is usually non-zero because it represents the sensor response to ambient light, such as sunlight or other external (uncontrolled) light sources.

2.3 3D reconstruction

Based on the estimated light transport matrix A, in this paper we propose two approaches to estimate the occupancy distribution: the light blockage model (Section “3D scene reconstruction with light blockage model”) and the light reflection model (Section “Floor-plane occupancy mapping with light reflection model”). The light blockage model is based on wall-mounted sensors, and results in 3D volumes. The light reflection model is based on ceiling-mounted sensors, and results in 2D maps (projections onto the floor plane).

The first approach, 3D scene reconstruction with a light blockage model, is closely related to existing work in the medical imaging, computer vision, robotics, and wireless sensor network literature. In medical imaging, techniques for 3D volume data reconstruction from projections include Fourier Slice Theorem based methods [23],[24], Algebraic Reconstruction Techniques (ART) [25], statistical methods [26], and total variation based methods [27]. In computer vision, people are interested in estimating the visual hull of a 3D object using 2D images [28],[29]. In robotics, an interesting problem is obstacle/object mapping — computing a spatial map to represent the obstacles or objects in the environment [30]. In wireless sensor networks, a related technique is Radio Tomographic Imaging (RTI), which uses the attenuation in received signal strength (RSS) caused by physical objects to create an image [31].

Reconstructing the 3D scene in a fixture-sensor smart lighting system is a very different problem from all the above mentioned work. In medical imaging (for example, computed tomography) multiple radiation sources (X-rays, for example) and sensors are typically rotated around the object to create numerous lines, and 3D images can be acquired slice by slice. In robotics, robots can move in the environment to sense at different locations. However, in a smart lighting system, all fixtures and sensors are firmly installed in the room and should not be moved during operation. Besides, the number of sensors is usually very small, unlike the visual hull problem in computer vision [28],[29], where an image has many pixels. Further, as we have discussed, any fixture illuminates the entire space, albeit non-uniformly, and any sensor receives light from a wide field of view. Thus, the spatial information that is contained in the small light transport matrix in our problem is very limited. The 3D reconstruction from such little information is extremely ill-posed. We should expect very rough low-resolution reconstruction results in our problem. However, that’s all we need. Since our goal is to control the lighting condition in the room, rough reconstruction suffices for this task.

3 Methods

3.1 Testbed setup

3.1.1 The smart space testbed

To implement and validate our ideas, we have established a Smart Space Testbed (SST). This room has one window and two doors, and is 85.5 inches wide, 135.0 inches long, and 86.4 inches high (Figure 2a). This testbed is equipped with twelve color-controllable LED fixtures mounted in the ceiling (Figure 2c). For each fixture, we can independently specify the intensity of three color channels: red, green, and blue. The input to each channel is scaled to lie in the range [0, 1]. We use twelve Colorbug wireless optical light sensors by SeaChanger (Figure 2b) as the color sensors in these experiments. These sensors can be installed either on the walls (Figure 2d) or on the ceiling. The key component of this sensor is an array of color-filtered photodiodes. Each color sensor has four output channels: red, green, blue and white (unfiltered). We use the Robot Raconteur software [32] for communication: The software connects to the color sensors with Wi-Fi, and sends input signals to the fixtures via Bluetooth. This same testbed has been used for a number of other investigations, including lighting control algorithms [33]–[36] and visual tracking systems [37].

Figure 2. The testbed setup. (a) The coordinate system of the room. (b) The Colorbug wireless optical light sensors by SeaChanger. (c) Twelve color-controllable LED fixtures illuminate the room from the ceiling. (d) Color sensors can be installed on the walls.

3.1.2 The occupancy-sensitive lighting system

The final goal of our system is to achieve occupancy-sensitive smart lighting. In other words, when the occupancy distribution in the room changes, the system should produce the lighting condition that best suits this occupancy scenario to maximize comfort, well-being, and productivity, and minimize energy consumption. In most cases, by “occupancy distribution”, we mean the number and spatial locations of people in the room. For this purpose, there should be a control strategy module and an occupancy sensing module, and they work in two alternating stages: the sensing stage and the adjustment stage (Figure 3). In the sensing stage, the occupancy sensing module collects the sensor readings to estimate the occupancy distribution; in the adjustment stage, the control strategy module decides what lighting condition should be produced based on the estimated occupancy distribution. The design of control strategies is beyond the scope of this paper. Here we focus on the occupancy sensing module.

Figure 3. Two stages of the lighting system. (a) In the sensing stage, the occupancy sensing module collects sensor readings under different lighting conditions. (b) In the adjustment stage, the control strategy module uses the estimated occupancy distribution to determine what base light should be produced.

3.1.3 Limitations of current testbed

The twelve LED fixtures in the smart space are 7′′ LED Downlight Round RGB (Vivia 7DR3-RGB) products from Renaissance Lighting, and these fixtures exhibit a delay of approximately 0.3 seconds between the input signals being specified and the desired lighting condition being produced. The current Colorbug sensors are commercial products, which are easy to install, but they are expensive, slow, and not customizable; each color measurement takes a few seconds. Thus, due to the very limited performance of our current fixtures and sensors, we are not able to fully implement a real-time occupancy-sensitive lighting system. However, we do emphasize that ultrafast LEDs and photodiodes have been used for visible light communication [38]–[41], and these LEDs and photodiodes can also be used for occupancy sensing. The experiments in this paper, using our current fixtures and sensors, suffice as a proof of concept and a validation of the methods.

3.2 Recovering the light transport matrix

Since the current configuration of our testbed has twelve LED fixtures with three channels each, the input to the system is an m_1 = 36 dimensional signal x. Because we have twelve color sensors, each with four channels, the measurement is an m_2 = 48 dimensional signal y. We have performed experiments to confirm that the affine relationship in Eq. (1) holds for our fixture-sensor system, where the matrix A is called the light transport matrix, and the vector b is the sensor response to the ambient light. If the affine relationship does not hold for certain fixtures or sensors, we can usually calibrate the fixtures or sensors to linearize the responses and make Eq. (1) hold.

The light transport matrix A is a very good signature of the occupancy distribution in the space, since it is independent of the fixture input or the ambient light. Matrix A is only dependent on the light transport of the scene, such as diffuse reflection, specular reflection, interreflection, and refraction [22]. Thus, by analyzing matrix A, we can extract spatial information about the scene.

3.2.1 Light transport in projector-camera systems

Efficient acquisition methods for the light transport matrix A have been studied extensively by the computer graphics community, because the high dimensionality of the vectors x and y makes the light transport matrix A very large in a projector-camera system, so the process of taking sufficient photos to recover A would be very slow. Efficient light transport sensing methods based on compressed sensing techniques have been studied by Sen et al. [21] and Peers et al. [18]. Wang et al. proposed a kernel Nyström method to efficiently reconstruct a low rank approximation of matrix A. O’Toole et al. presented a low rank approximation solution using an optical implementation of Arnoldi iteration, projecting the photo captured by the camera back onto the scene iteratively [19].

3.2.2 Light transport in fixture-sensor systems

The efficient methods mentioned above are interesting. However, a smart lighting system is very different from a projector-camera system. We cannot apply an arbitrary lighting condition to the space to acquire light transport information: a smart lighting system is built for a space where people live and work, and we must ensure their comfort. The good news is that we can still change the lighting condition, but only with very small changes that are imperceptible to the room’s occupants. Also, since a smart lighting system only has a few fixtures and a few sensors, the light transport matrix A is much smaller than in a projector-camera system. Since modern LEDs and photodiodes can operate very fast (so fast that they can be used for communication at megabit per second [38] or even gigabit per second [39]–[41] data rates), sufficient measurements can be acquired within a very short time period, during which we can assume both the occupancy distribution and the ambient light conditions are unchanged. We refer to this as the quasi-static assumption.

Eliminating b.

To eliminate the ambient light response from Eq. (1), we proceed as follows. We first set the LED input to a reference level x_0, and the output of the sensors is

y_0 = A x_0 + b.
(2)

Now if we add a small perturbation δx to the input, the new output becomes

y_0 + δy = A(x_0 + δx) + b.
(3)

By simple subtraction, we can eliminate b, and get

δy=Aδx.
(4)

In our smart lighting system, we call x_0 the base light, which is determined by the control strategy module. We call δx a perturbation, which will be discussed in Section “Perturbation-modulated lighting”. Depending on the desired lighting conditions and possible changes in the room occupancy, x_0 may be adjusted over time, but not during sensing.
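As a concrete illustration, a minimal sketch of this sensing loop is given below. The helper functions set_fixture_input and read_sensors are hypothetical placeholders for the testbed's fixture and sensor interfaces, which are not specified in this paper.

```python
import numpy as np

def measure_perturbation_responses(x0, perturbations, set_fixture_input, read_sensors):
    """Apply each perturbation on top of the base light x0 and record the
    change in the sensor readings; the subtraction cancels the ambient term b."""
    set_fixture_input(x0)
    y0 = np.asarray(read_sensors())        # y0 = A x0 + b
    delta_ys = []
    for dx in perturbations:               # each dx is an m1-dimensional vector
        set_fixture_input(x0 + dx)
        y = np.asarray(read_sensors())     # y = A (x0 + dx) + b
        delta_ys.append(y - y0)            # delta y = A delta x  (Eq. 4)
    set_fixture_input(x0)                  # restore the base light
    return np.column_stack(delta_ys)       # Y: m2 x n matrix of responses
```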

Solving for A.

If we can apply different perturbations to the fixtures very quickly, and also read the sensor outputs very quickly, we can make many measurements within a very short time period, during which we can assume both the matrix A and the vector b do not change. Thus, if we measure y_0 once, and measure y_0 + δy multiple times with different δx, we get a linear system to solve for A. In other words, we perturb the input x_0 to the LED fixtures with different m_1-dimensional signals δx_1, δx_2, …, δx_n, and measure the m_2-dimensional changes in the sensor readings δy_1, δy_2, …, δy_n. Let X = [δx_1, δx_2, …, δx_n] and Y = [δy_1, δy_2, …, δy_n], where X is m_1 × n and Y is m_2 × n. Now the problem becomes a linear system Y = AX, which is very similar to the light transport problem in computer graphics.

With modern LEDs and rapid-response color sensors, we can usually make enough measurements in a short time period to ensure n > m_1. Thus this overdetermined linear system can simply be solved with the Moore-Penrose pseudo-inverse:

A = Y X^T \left( X X^T \right)^{-1},
(5)

which corresponds to minimizing the Frobenius norm of the error:

\min_A \left\| Y - AX \right\|_F.
(6)

If under some circumstances n is smaller than m_1, then Y = AX is an underdetermined system, and other methods such as recursive least squares (RLS) [35],[42], low rank approximation, or sparse approximation [43] can be used. In our problem, we can always make enough measurements to ensure n > m_1 and use the simple pseudo-inverse method.
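The least-squares recovery itself is a few lines of linear algebra; the following is a minimal sketch, assuming the perturbations and measured responses have already been stacked column-wise into X and Y as described above.

```python
import numpy as np

def recover_light_transport(X, Y):
    """Solve Y = A X for the light transport matrix A in the least-squares
    sense (Eqs. 5 and 6). X is m1 x n (perturbations), Y is m2 x n (responses)."""
    # For n > m1 with full row rank, pinv(X) = X^T (X X^T)^{-1}, so this is
    # exactly Eq. (5); pinv also returns the minimum-norm solution in the
    # underdetermined case.
    return Y @ np.linalg.pinv(X)

# Testbed dimensions: m1 = 36 fixture channels, m2 = 48 sensor channels,
# n = 40 perturbation patterns, so the recovered A has shape (48, 36).
```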

3.3 Perturbation-modulated lighting

3.3.1 Perturbation modulation

As introduced in Section “The occupancy-sensitive lighting system”, the smart lighting system works in two alternating stages: sensing and adjustment. During the sensing stage, perturbations δx are added to the base light x_0, and δy is measured. Then in the adjustment stage, the matrix A is computed, the occupancy distribution is estimated, and the control strategy module gradually changes the base light to a new one (if necessary), which is determined according to the estimated occupancy distribution. In such a system, the base light changes slowly over a large range, while the perturbation changes quickly, and ideally imperceptibly, within a small range (Figure 4).

Figure 4. The concept of perturbation-modulated lighting.

3.3.2 Requirements for perturbation patterns

To accurately recover the light transport matrix while also ensuring the comfort of the occupants of the space, we specify three requirements on the perturbation patterns:

  1. The perturbation patterns must be rich enough in variation to capture sufficient information from the scene.

  2. The magnitude of the perturbation must be small enough not to bother humans in the space.

  3. The magnitude of the perturbation must be large enough to be accurately measured by the color sensors.

To meet the first requirement, randomly generated patterns usually suffice [44]. If we define the magnitude of the perturbation patterns as the maximum deviation from the base light, ρ = max_i ‖δx_i‖_∞, then the choice of ρ is a trade-off. We have performed sensitivity analyses, and list some of the results in Figures 5 and 6. To study the sensor sensitivity, we add a sinusoid of a specific magnitude to one LED on one color channel, and we record the response of one sensor in the same color channel. Figure 5 shows the results using the green channel. As we can see, based on a range of [0, 1], when ρ is as small as 0.01, the sensor response is noticeably distorted; as ρ gets larger, the sensor response becomes well-behaved (more linear). In Figure 6, we show four images of the room taken by a camera at different times during the perturbation interval for each ρ. We have observed that when ρ is large, the change of lighting can be very annoying^a. In our work, we set ρ = 0.025 such that perturbations are not easily noticed, but can be accurately sensed by our current color sensors. Improved sensors will allow a larger range of acceptable ρ values.

Figure 5. Sensitivity analysis: the response in the green channel of one color sensor to a sinusoid on the green channel of one LED fixture with different magnitudes.

Figure 6. Sensitivity analysis: images of the room under perturbation patterns of different magnitudes. Each row corresponds to one magnitude value ρ.
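For illustration, one simple way to generate such patterns is a bounded uniform random draw, sketched below; the paper does not prescribe a particular distribution, so this choice is only an assumption.

```python
import numpy as np

def generate_perturbations(n, m1, rho, x0=None, rng=None):
    """Draw n random perturbation patterns, each an m1-dimensional vector with
    entries in [-rho, rho], i.e. maximum deviation rho from the base light."""
    rng = np.random.default_rng() if rng is None else rng
    dx = rng.uniform(-rho, rho, size=(n, m1))
    if x0 is not None:
        # Keep the perturbed drive signals inside the valid [0, 1] range.
        dx = np.clip(x0 + dx, 0.0, 1.0) - x0
    return dx

# e.g. 40 patterns for 36 fixture channels with rho = 0.025:
# patterns = generate_perturbations(40, 36, 0.025, x0=np.full(36, 0.5))
```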

3.3.3 Perturbation ordering

Now assume that we have randomly generated n perturbation patterns δx_1, δx_2, …, δx_n with magnitude ρ. In the sensing stage, we apply these patterns to measure the changes in sensor output, and recover the light transport matrix A. Here one question arises: in what order should we arrange these perturbation patterns to maximize human comfort?

Studies of the human visual system have established thresholds for perceiving flicker at different frequencies [45],[46]. Intuitively, we wish the light to change gradually, and thus less noticeably. For gradual changes, we want neighboring perturbation patterns to be as similar as possible. Let (i_1, i_2, …, i_n) be a permutation of (1, 2, …, n). Then (δx_{i_1}, δx_{i_2}, …, δx_{i_n}) is a re-ordering of the patterns (δx_1, δx_2, …, δx_n). We naturally arrive at the following optimization problem:

\min_{i_1, i_2, \ldots, i_n} \; \left\| \delta x_{i_1} \right\| + \sum_{k=1}^{n-1} \left\| \delta x_{i_{k+1}} - \delta x_{i_k} \right\| + \left\| \delta x_{i_n} \right\|,
(7)

where ‖·‖ is a chosen vector norm, usually the ℓ_2 norm.

The optimization problem in Eq. (7) has a very straightforward graph-theoretical interpretation. We create a weighted complete undirected graph G with n+1 vertices, where each perturbation pattern δ x i is a vertex, plus one vertex corresponding to the base light. The weight of an edge between two vertices is just the norm of the difference between the two corresponding perturbation patterns, where the perturbation pattern corresponding to the base light is all zeros. Finding the solution to problem Eq. (7) is equivalent to finding the shortest Hamiltonian cycle of G, or solving the famous NP-hard Travelling Salesman Problem (TSP) that has been intensely studied [47]. Thus, any existing TSP algorithm (e.g.[48]–[51]) can be used to solve Eq. (7). In our work, we use a very simple genetic algorithm [52], where the mutation of a genome (a Hamiltonian cycle) is simply cross-linking two randomly selected non-incident edges, as shown in Figure 7.

Figure 7. Mutating a Hamiltonian cycle by cross-linking two non-incident edges.
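A minimal sketch of this ordering step is shown below. For brevity it replaces the genetic algorithm of [52] with a greedy local search that applies the same cross-linking move (implemented as a 2-opt segment reversal, cf. Figure 7) and keeps a mutation only when it shortens the cycle.

```python
import numpy as np

def cycle_length(order, pts):
    """Length of the closed tour base light -> patterns (in 'order') -> base
    light, where the base light corresponds to the all-zero perturbation."""
    zero = np.zeros((1, pts.shape[1]))
    tour = np.vstack([zero, pts[order], zero])
    return np.sum(np.linalg.norm(np.diff(tour, axis=0), axis=1))

def order_perturbations(pts, iters=20000, rng=None):
    """Greedily improve a random ordering with the cross-linking mutation."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pts)
    order = rng.permutation(n)
    best = cycle_length(order, pts)
    for _ in range(iters):
        i, j = sorted(rng.choice(n, size=2, replace=False))
        cand = order.copy()
        cand[i:j + 1] = order[i:j + 1][::-1]   # reverse a segment of the cycle
        length = cycle_length(cand, pts)
        if length < best:
            order, best = cand, length
    return order    # indices i_1, ..., i_n approximately minimizing Eq. (7)
```

For example, `order_perturbations(patterns)` with `patterns` the (n, m1) array of δx vectors returns a permutation to apply during the sensing stage.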

3.4 3D scene reconstruction with light blockage model

In Section “Light transport in fixture-sensor systems”, we discussed how to obtain the light transport matrix in a fixture-sensor system. In this section, we introduce the first approach for estimating the occupancy distribution using the light transport matrix A. This approach requires the color sensors to be installed on the walls of the room. In Section “Floor-plane occupancy mapping with light reflection model” we will introduce a second approach, which can use ceiling-mounted color sensors.

3.4.1 Light blockage model

Let the light transport matrix of an empty room^b be A_0. At run time, the light transport matrix is A, and we call E = A_0 − A the difference matrix. Matrix E is also m_2 × m_1, and each entry of E corresponds to one fixture channel and one sensor channel. If an entry of E has a large positive value, the total flux from the corresponding fixture channel to the corresponding sensor channel has been significantly attenuated, that is, many of the light paths between them are very likely blocked. With all sensors mounted on the walls, from any given fixture to any given sensor there are numerous diffuse reflection paths and one direct path, which is the line segment connecting the fixture and the sensor (Figure 8a). Obviously, the direct path is the dominant path, if one exists. Thus, a large entry of E most likely implies that the corresponding direct path has been blocked due to a change in the occupancy distribution.

Figure 8. Explanation of light paths. (a) Light paths from one fixture to one sensor. (b) Intersecting blocked light paths imply blockage at their intersection.

3.4.2 Aggregation of E

Though each entry of E corresponds to one direct path, the converse is not true, since each LED fixture or sensor has multiple channels. Assume the number of LED fixtures is N_L, and the number of sensors is N_S. We aggregate the m_2 × m_1 matrix E into an N_S × N_L matrix Ê, such that the mapping from the entries of Ê to all direct paths is a bijection. In our experiments, m_1 = 3N_L = 36 and m_2 = 4N_S = 48. The aggregation is performed for each fixture-sensor pair as a weighted summation over the three color channels red, green, and blue. This can be formulated as:

\hat{E}_{i,j} = w_R E_{4i-3,\,3j-2} + w_G E_{4i-2,\,3j-1} + w_B E_{4i-1,\,3j},
(8)

where the weights w_R, w_G, and w_B can be used to compensate for the different sensitivities of the sensors on different color channels.
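A minimal sketch of this aggregation is given below, assuming E is laid out with sensor channels as rows in (R, G, B, W) order and fixture channels as columns in (R, G, B) order, consistent with the indexing of Eq. (8).

```python
import numpy as np

def aggregate_difference(E, n_sensors=12, n_fixtures=12, w=(1.0, 1.0, 1.0)):
    """Collapse the (4*n_sensors) x (3*n_fixtures) difference matrix E into an
    n_sensors x n_fixtures matrix E_hat by a weighted sum over R, G, B."""
    w_r, w_g, w_b = w
    E_hat = np.zeros((n_sensors, n_fixtures))
    for i in range(n_sensors):
        for j in range(n_fixtures):
            E_hat[i, j] = (w_r * E[4 * i,     3 * j]        # red channels
                           + w_g * E[4 * i + 1, 3 * j + 1]  # green channels
                           + w_b * E[4 * i + 2, 3 * j + 2]) # blue channels
    return E_hat
```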

3.4.3 Reconstruction algorithm

After aggregation, if Ê has a large entry at (i,j), then we believe the direct path from fixture j to sensor i is very likely blocked, though we are still not sure where the blockage happens along this path. Making the reasonable assumption that any occupants will have large cross sections relative to the thickness of a light path, any position that is close to this direct path is also likely being occupied. If two or more such direct paths intersect or approximately intersect in the 3D space, then it is most likely that the blockage happens at their intersection, as shown in Figure 8b.

Based on this assumption, we now describe our 3D reconstruction algorithm. For any point in the 3D space, we estimate the confidence that this point is occupied. Let P be an arbitrary point in the 3D space, and let d_{i,j}(P) be the point-to-line distance from P to the direct path from fixture j to sensor i. The confidence of point P being occupied, C(P), is computed by:

C(P) = \frac{\sum_{i=1}^{N_S} \sum_{j=1}^{N_L} \hat{E}_{i,j}\, G\!\left(d_{i,j}(P), \sigma\right)}{\sum_{i=1}^{N_S} \sum_{j=1}^{N_L} G\!\left(d_{i,j}(P), \sigma\right)},
(9)

where G(·,·) is the Gaussian kernel:

G(a, \sigma) = \exp\!\left( -\frac{a^2}{2\sigma^2} \right).
(10)

The denominator in Eq. (9) is a normalization term for the non-uniform spatial distribution of the LED fixtures and the sensors. The parameter σ is a measure of the continuity and smoothness of the occupancy, and should be related to the physical size of the occupants we expect. For simplicity, we assume σ is isotropic. If we discretize the 3D space and evaluate Eq. (9) at every position P(x,y,z), we can render a 3D volume V(x,y,z)=C(P(x,y,z)) of the scene, which can then be visualized.
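To make the rendering step concrete, the following sketch evaluates Eq. (9) over a set of voxel centres. It is an illustrative restatement in Python, not the authors' C++ implementation; fixture and sensor coordinates are assumed to be given as arrays.

```python
import numpy as np

def point_to_line_distance(p, a, b):
    """Distance from point p to the line through a and b (the direct path)."""
    ab = b - a
    return np.linalg.norm(np.cross(p - a, ab)) / np.linalg.norm(ab)

def gaussian(a, sigma):
    return np.exp(-a**2 / (2.0 * sigma**2))            # Eq. (10)

def confidence_volume(E_hat, fixtures, sensors, grid_points, sigma=20.0):
    """Evaluate Eq. (9) at each voxel centre.

    fixtures: (N_L, 3), sensors: (N_S, 3), grid_points: (N_P, 3).
    Returns an (N_P,) vector of occupancy confidences C(P)."""
    num = np.zeros(len(grid_points))
    den = np.zeros(len(grid_points))
    for i, s in enumerate(sensors):
        for j, f in enumerate(fixtures):
            d = np.array([point_to_line_distance(p, f, s) for p in grid_points])
            g = gaussian(d, sigma)
            num += E_hat[i, j] * g
            den += g
    return num / den
```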

3.4.4 Connection with Radon transform

Our 3D reconstruction method is partially inspired by the well-known Radon transform, or more precisely, the inverse Radon transform, which has been successfully applied to the reconstruction of computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), and even radar astronomy [53],[54]. Given a continuous function f(x,y) on ℝ², its Radon transform is a function defined on each straight line L={(x(t),y(t))} in ℝ²:

\mathcal{R}f(L) = \int_{-\infty}^{\infty} f\big(x(t), y(t)\big)\, \mathrm{d}t,
(11)

where t is an arc-length parameterization of L. Since a straight line can be uniquely defined by two parameters, ℛf is also a function on ℝ². The original function f can be reconstructed by the inverse Radon transform, which comprises a ramp filter and a back-projection. An example is shown in Figure 9. In our reconstruction algorithm, Eq. (9), the denominator corresponds to the ramp filter, and the summation over all direct light paths corresponds to the back-projection.

Figure 9. Radon transform of the Shepp-Logan phantom [55].

As discussed in Section “3D reconstruction”, unlike a tomography problem where the sampled lines are very dense, in our smart lighting problem, with only twelve LED fixtures and twelve sensors that are fixed during the measurements, the direct light paths are very sparse (Figure 10), which makes reconstruction much more challenging than other problems that could be solved by a standard Radon transform. Thus, we can only expect very rough reconstruction results (but that’s all we need or want), and a simple algorithm like Eq. (9) should suffice.

Figure 10. The 144 direct light paths in the testbed for wall-mounted sensors.

3.5 Floor-plane occupancy mapping with light reflection model

The 3D scene reconstruction approach introduced in Section “3D scene reconstruction with light blockage model” is based on a light blockage model, thus requiring that all sensors be installed such that direct light paths exist for all fixture-sensor pairs, and are easily blocked by occupants. Practically speaking, this means the sensors are mounted on the walls. If the sensors are installed on the ceiling, there will be no direct light path from fixture to sensor. In this case, we only have reflection paths. With all sensors and fixtures in the same plane, we no longer have any spatial information about the z-axis direction (see Figure 2a for the spatial coordinate system of the testbed). In this section, we introduce our second occupancy distribution estimation approach, which models the light transport for ceiling-mounted sensors using geometrical optics and photometry analysis.

3.5.1 Photometry for fixtures and sensors

Before we describe our light reflection model, we need to revisit our fixtures and sensors. What physical quantities should we use to describe the fixture input and the sensor output? In photometry, we use luminous intensity to measure the power emitted by a light source in a particular direction per unit solid angle. A numeric value read from a sensor is luminous flux, which measures the perceived power of incident light.

For a light fixture, the luminous intensity is non-isotropic. For example, the polar luminous intensity distribution graph of our Vivia 7DR3-RGB fixture is provided in Figure 11. Let the luminous intensity in the normal direction be I_max. Then in the direction at angle θ to the normal direction, the luminous intensity can be written as I_max · q(θ).

Figure 11. The polar luminous intensity distribution of our LED fixtures.

3.5.2 Light reflection model

With the color sensors installed on the ceiling, what does a large entry in the aggregated difference matrix Ê (see Section “Aggregation of E”) signify? It still means that the light paths from the corresponding fixture to the corresponding sensor are affected. Though these light paths are all diffuse reflection paths, we can still roughly estimate which areas of the room are more likely to be occupied than others. For this purpose, we consider a very small area ds_1 on the floor plane and one fixture-sensor pair. As shown in Figure 12, the fixtures illuminate the room downward, and the color sensors are “looking” downward. We assume that the sensing area of the color sensor is ds_2, the angle of the light path from the fixture to ds_1 is θ_1, the angle of the light path from ds_1 to ds_2 is θ_2, the distance from the fixture to ds_1 is D_1, and the distance from ds_1 to ds_2 is D_2. We also assume that ds_1 is an ideal matte Lambertian surface with albedo α.

Figure 12. The light reflection model.

First, we consider the luminous flux arriving at ds_1 from the fixture. The luminous intensity along the light path from the fixture to ds_1 is I_max · q(θ_1), and the solid angle^c is ds_1 cos θ_1 / (4π D_1²). Thus the luminous flux arriving at ds_1 is the product of the luminous intensity and the solid angle:

\Phi_1 = I_{\max} \cdot q(\theta_1) \cdot \frac{ds_1 \cos\theta_1}{4\pi D_1^2}.
(12)

Since the albedo of ds_1 is α, the luminous intensity of the reflected light from ds_1 in the normal direction is proportional to αΦ_1. For simplicity, we simply use αΦ_1 to denote the luminous intensity in the normal direction. Since ds_1 is a Lambertian surface, the luminance of the surface is isotropic, and the luminous intensity obeys Lambert’s cosine law. Thus the luminous intensity of the reflected light along the light path from ds_1 to ds_2 is αΦ_1 cos θ_2. The solid angle from ds_1 to ds_2 is ds_2 cos θ_2 / (4π D_2²). Thus, finally, the luminous flux arriving at ds_2 from the fixture and reflected by ds_1 is:

\Phi_2 = \alpha \Phi_1 \cos\theta_2 \cdot \frac{ds_2 \cos\theta_2}{4\pi D_2^2} = \alpha I_{\max} \cdot q(\theta_1) \cdot \frac{ds_1 \cos\theta_1}{4\pi D_1^2} \cdot \cos\theta_2 \cdot \frac{ds_2 \cos\theta_2}{4\pi D_2^2}.
(13)

For all fixtures, I_max and the function q(·) are the same. For all sensors, ds_2 is the same. For different positions on the floor plane, we assume the albedo α is constant, and use a ds_1 of the same area. Then Φ_2 is a function of the position (of ds_1) on the floor plane:

\Phi_2 = K \cdot q(\theta_1) \cdot \frac{\cos\theta_1 \cos^2\theta_2}{D_1^2 D_2^2},
(14)

where K = α I_max ds_1 ds_2 / (16π²) is a constant independent of position, and all fixture-sensor pairs share the same value of K. The quantities θ_1, θ_2, D_1, and D_2 all depend on the position.
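A minimal sketch of pre-computing Eq. (14) on a floor-plane grid for one fixture-sensor pair is shown below. The cosine fall-off used for q(·) is only a placeholder for the measured distribution in Figure 11, and the shared constant K is dropped.

```python
import numpy as np

def reflection_kernel(fixture_xy, sensor_xy, height, xs, ys,
                      q=lambda theta: np.cos(theta)):
    """Evaluate Eq. (14), up to the shared constant K, on a floor-plane grid.

    fixture_xy, sensor_xy: (x, y) ceiling positions at z = height;
    xs, ys: 1D arrays of floor-plane sample coordinates (z = 0);
    q: polar intensity distribution (cosine profile used here as a stand-in)."""
    X, Y = np.meshgrid(xs, ys)
    D1 = np.hypot(np.hypot(X - fixture_xy[0], Y - fixture_xy[1]), height)
    D2 = np.hypot(np.hypot(X - sensor_xy[0], Y - sensor_xy[1]), height)
    cos1 = height / D1                      # cos(theta_1), fixture -> ds1
    cos2 = height / D2                      # cos(theta_2), ds1 -> sensor
    return q(np.arccos(cos1)) * cos1 * cos2**2 / (D1**2 * D2**2)
```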

3.5.3 2D confidence map

Intuitively, if there is a large entry in the matrix Ê, then we can find the corresponding fixture-sensor pair, and compute Φ_2 at all positions on the floor using Eq. (14). Larger Φ_2 values indicate regions that are more likely to be occupied.

Based on this intuition, we can pre-compute Φ_2 at all positions for all fixture-sensor pairs offline. We call the pre-computed Φ_2 at all positions the reflection kernel of the corresponding fixture-sensor pair. Two reflection kernels are displayed in Figure 13 as examples.

Figure 13. The reflection kernels of two fixture-sensor pairs.

Let the reflection kernel for fixture j and sensor i be R_{i,j}. Then a 2D confidence map can be computed simply as a weighted sum of all these reflection kernels:

C = \sum_{i=1}^{N_S} \sum_{j=1}^{N_L} \hat{E}_{i,j}\, R_{i,j}.
(15)

Unlike the light blockage model in Section “3D scene reconstruction with light blockage model”, where a 3D volume is reconstructed, here we can only estimate a 2D confidence map. In this confidence map, a pixel represents the confidence that the corresponding point on the 2D floor plane is being affected by occupants. It can be either affected by a person standing at that point, or affected by the shadow of a person.

We can also modify Eq. (15) to:

C = \frac{\sum_{i=1}^{N_S} \sum_{j=1}^{N_L} \hat{E}_{i,j}^{\lambda_1}\, R_{i,j}}{\left( \sum_{i=1}^{N_S} \sum_{j=1}^{N_L} R_{i,j} \right)^{\lambda_2}},
(16)

such that the parameter λ_1 ≥ 1 encourages large entries in Ê and sharpens the resulting confidence map, and the normalization parameter λ_2 ≥ 0 can ameliorate distortions in the resulting confidence map caused by the non-uniform spatial distribution of fixtures and sensors. Eq. (15) is the special case of Eq. (16) with λ_1 = 1 and λ_2 = 0.
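A minimal sketch of Eq. (16) follows, assuming the reflection kernels have been pre-computed into one array. Negative entries of Ê are clipped to zero here so that the exponent λ_1 remains well-defined; this clipping is an assumption, not something stated in the text.

```python
import numpy as np

def confidence_map(E_hat, kernels, lam1=1.0, lam2=0.0):
    """Weighted sum of pre-computed reflection kernels (Eq. 16).

    E_hat:   (N_S, N_L) aggregated difference matrix
    kernels: (N_S, N_L, H, W) reflection kernels R_{i,j} on the floor grid
    lam1 = 1, lam2 = 0 recovers the plain weighted sum of Eq. (15)."""
    weights = np.clip(E_hat, 0.0, None) ** lam1   # clipping: assumed, see text
    num = np.einsum('ij,ijhw->hw', weights, kernels)
    den = kernels.sum(axis=(0, 1)) ** lam2
    return num / den
```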

4 Results

4.1 3D reconstruction results with light blockage model

To validate the first approach introduced in Section “3D scene reconstruction with light blockage model”, we divide the smart room into six regions, and create nine occupancy scenarios by occupying one or two regions with people and furniture. We discretize the 3D space into voxels of size 1×1×1 inch³, and render 3D volumes of size 87×136×88. For the Gaussian kernel, we set σ = 20.0 inches. In the sensing stage, n = 40 perturbation patterns are used. The spatial coordinates of the twelve LED fixtures and the twelve color sensors are listed in Table 1, and are visualized in Figure 10.

Table 1 The spatial coordinates (in inches) of the twelve LED fixtures and the twelve color sensors installed on the walls/ceiling in the testbed

4.1.1 Reconstructed volumes

In Figure 14 we show the results for scenarios where only one region is occupied, and in Figure 15 we show the results where two regions are occupied. It is interesting to see that although the precision of the reconstructed volume is very low, the reconstruction quality is good enough for the lighting control module to determine which part of the room is occupied, and what kind of light should be delivered. If better reconstruction quality is required, one simple solution is to increase the number of color sensors. This becomes part of the system design process for an operational smart space. However, since our goal is only roughly estimating the occupancy distribution such that we can decide what lighting condition should be produced, we do not need high-resolution and high-quality 3D volumes (further discussed in Section “The quality of the estimation”).

Figure 14. 3D reconstruction results with the light blockage model for occupancy scenarios where only one region is occupied. Each row is one scenario. Column 1: a diagram of the ground truth scenario; Column 2: images captured by four cameras in the room during measurement; Column 3: the reconstructed 3D volume; Column 4: the integral of the reconstructed volume along the z-axis, to be compared with the ground truth.

Figure 15. 3D reconstruction results with the light blockage model for occupancy scenarios where two regions are occupied. Each row is one scenario.

4.1.2 Complexity analysis and accelerations

Assume the number of voxels in one volume is N_P. The number of direct light paths is N_L · N_S. To render one volume, we have to evaluate Eq. (9) for N_P voxels, so the total number of operations is N_P · N_L · N_S. Each operation computes one point-to-line distance and one Gaussian kernel. In our experiments, N_P = 87×136×88, N_S = 12, and N_L = 12, so the number of operations is about 150 million. Our rendering algorithm is implemented in C++. On a Macintosh with a 2.5 GHz Intel Core i5 CPU and 8 GB of memory, the direct algorithm takes about 18 seconds to render one volume.

One way to accelerate the rendering is to pre-compute the point-to-line distances and the Gaussian kernels, and keep them in memory. When rendering a new volume, we still need to perform N_P · N_L · N_S operations, but each operation is simply one multiplication and one addition. In this way, on the same machine, pre-computation takes about 18 seconds, but rendering each volume takes only 2 seconds. The trade-off is that such a lookup-table optimization uses much more memory: if each Gaussian kernel value is stored as a 64-bit double-precision floating point number, about 1 GB of memory is required to hold 150 million values. To further accelerate the rendering to achieve real-time performance, either parallel computing on a GPU could be used, or the number of voxels could be reduced by downsampling.
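The pre-computation amounts to caching the Gaussian weight of every (sensor, fixture, voxel) triple so that each new volume becomes a weighted sum. A minimal sketch, with an assumed array layout, is:

```python
import numpy as np

def _line_distance(p, a, b):
    """Point-to-line distance for the direct path from fixture a to sensor b."""
    ab = b - a
    return np.linalg.norm(np.cross(p - a, ab)) / np.linalg.norm(ab)

def precompute_gaussian_weights(fixtures, sensors, grid_points, sigma=20.0):
    """Cache G(d_ij(P), sigma) for every (sensor, fixture, voxel) triple.
    Returns an (N_S, N_L, N_P) array; at double precision the ~150 million
    entries quoted above occupy roughly 1 GB of memory."""
    W = np.empty((len(sensors), len(fixtures), len(grid_points)))
    for i, s in enumerate(sensors):
        for j, f in enumerate(fixtures):
            d = np.array([_line_distance(p, f, s) for p in grid_points])
            W[i, j] = np.exp(-d**2 / (2.0 * sigma**2))
    return W

def render_volume(E_hat, W):
    """Per-volume cost is now one multiply-add per cached weight (Eq. 9)."""
    return np.einsum('ij,ijp->p', E_hat, W) / W.sum(axis=(0, 1))
```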

4.2 Floor-plane confidence maps with light reflection model

For the second approach introduced in Section “Floor-plane occupancy mapping with light reflection model”, we place all color sensors on the ceiling. Each sensor is installed close to one LED fixture. The spatial coordinates of the fixtures and the sensors can be found in Table 1. Again, we create nine occupancy scenarios by occupying one or two regions with people and furniture, and discretize the 2D floor plane into pixels of size 1×1 inch². The confidence maps computed with Eq. (15) for the nine occupancy scenarios are shown in Figures 16 and 17. We can see that the resulting 2D confidence maps are basically correct when compared to the ground truth — we can see which regions in the room are being occupied. When compared with the results in Figures 14 and 15, we find that the 3D scene reconstruction approach introduced in Section “3D scene reconstruction with light blockage model” produces better estimates than the light reflection model here. This is because in the light blockage model based approach, the color sensors are installed on the walls, so the z-coordinate information is well captured. But when the sensors are installed on the ceiling at the same height as the fixtures, the z-coordinate information is completely lost. Without such important information, the quality of the estimation results is expected to drop.

Figure 16. Floor-plane confidence maps computed with the light reflection model for occupancy scenarios where one region is occupied. Each row is one scenario.

Figure 17. Floor-plane confidence maps computed with the light reflection model for occupancy scenarios where two regions are occupied. Each row is one scenario.

Since Eq. (15) is only a weighted summation of pre-computed reflection kernels, and both N_L and N_S are small, generating a 2D confidence map is very fast.

4.3 Quantitative evaluation

Due to the complexity of a real 3D scene, it is difficult to assess the reconstructed 3D volume or the estimated floor-plane 2D confidence map quantitatively. The ground truth is also difficult to represent accurately. To roughly compare the two different approaches, we generate the floor-plane ground truth of occupancy distribution for the nine scenarios by assuming a person or a chair is a disk with radius 10 inches on this plane, as shown in Figure 18.

Figure 18. Manually generated floor-plane ground truth of the occupancy distribution for the nine different scenarios. Each disk represents a person or a chair.

Once we have a 2D ground truth map, we can flatten it into a vector and compute the correlation coefficient between the ground truth and an estimated 2D map. For the light reflection model, we simply use the floor-plane 2D confidence map estimated using Eq. (15) or Eq. (16). For the light blockage model, we use the z-axis integral of the reconstructed 3D volume as the floor-plane confidence map. The correlation coefficient lies in the range [−1, 1]; the larger the correlation coefficient, the better the estimated occupancy map. For each of the nine scenarios and each of the two approaches, we create multiple instances, and compute the average correlation coefficient over all instances, using different parameters. The mean value of the average correlation coefficients over all nine scenarios can be used as a final score, which we call the mACC (mean average correlation coefficient). Results are reported in Table 2. From this table we observe that the light blockage model performs much better than the light reflection model. This is expected, because we lose all z-coordinate information when we mount all sensors on the ceiling. Even for the light blockage model, the correlation coefficient values are still mostly smaller than 0.5. This is also expected, partly due to the difficulty of accurately representing the ground truth, and partly due to the challenge of the problem itself.

Table 2 The average correlation coefficient between the manually generated ground truth and the estimated occupancy map: for each of the nine scenarios and each of the two approaches
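A minimal sketch of this evaluation metric follows, assuming the ground-truth and estimated maps are 2D arrays of the same size.

```python
import numpy as np

def correlation_coefficient(ground_truth_map, estimated_map):
    """Pearson correlation between the flattened ground-truth occupancy map
    and the flattened estimated confidence map (or z-integrated volume)."""
    return np.corrcoef(ground_truth_map.ravel(), estimated_map.ravel())[0, 1]

def mean_average_correlation(per_scenario_instances):
    """mACC: average the correlation over the instances of each scenario,
    then take the mean over all scenarios.

    per_scenario_instances: list (one entry per scenario) of lists of
    (ground_truth_map, estimated_map) pairs."""
    averages = [np.mean([correlation_coefficient(gt, est) for gt, est in inst])
                for inst in per_scenario_instances]
    return float(np.mean(averages))
```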

5 Discussions

5.1 What is being sensed?

Regarding the novel color-sensor-based occupancy sensing technique introduced in this paper, the most significant question is: what is actually being sensed, compared to other techniques such as PIR or ultrasonic sensors? PIR sensors sense infrared radiation, and ultrasonic sensors measure distances. In our technique, the occupancy estimate is based on the difference matrix between the light transport of an empty room and the light transport of the current room, as described in Section “3D scene reconstruction with light blockage model”. This difference can be caused by either people or furniture. The “empty room” is not necessarily really empty; it is the room condition when the matrix A_0 is acquired, so more precisely it is the “reference room”. If the reference room is already occupied, then either removing an occupant or adding a new one will produce a difference between the reference transport matrix A_0 and the current transport matrix A, and will thus be sensed. In a real application of this technique, there should be a calibration button for the user to manually set the present room condition as the reference room.

5.2 Aggregation of the difference matrix

In the two approaches introduced in Sections “3D scene reconstruction with light blockage model” and “Floor-plane occupancy mapping with light reflection model”, we aggregate the difference matrix E = A_0 − A into a smaller matrix Ê, as discussed in Section “Aggregation of E”. When sensing the occupancy, we are only interested in where the occupant is; we are not concerned with which color channel the occupant affects more. However, this does not mean that the color information measured by the color sensors is not useful. The summation over all three color channels mitigates errors or noise in any single color channel. A system with only a single tunable channel, e.g. a brightness-tunable white lighting system, would be much more vulnerable to inaccurate measurements.

5.3 Assumptions in the models

The light blockage model introduced in Section “3D scene reconstruction with light blockage model” assumes that a direct light path exists for any fixture-sensor pair. Thus, we install all sensors on the walls to make this assumption true. Apart from this assumption, we also assume that the direct light path is the dominating path, such that any changes in the diffuse reflection paths can be ignored relative to the changes in the direct light path. This assumption is mostly true, but may fail in some special cases. For example, when there is a large mirror in the room, there will be specular reflection paths. These specular reflection paths should be as important as direct light paths, and cannot be ignored.

The light reflection model also has several assumptions apart from assuming all sensors are mounted on the ceiling. First, since we did not consider any reflection by the walls, we are assuming the wall surface reflection can be ignored. Although the total surface area of the walls will almost always be larger than that of the floor, since all fixtures and all sensors are “looking down”, this assumption is acceptable. The second assumption is that the floor surface is Lambertian. This assumption does not have to be true, since if we know the surface is non-Lambertian, we simply need to modify Eq. (13) according to the surface property. Another assumption is that we assume the floor plane has uniform albedo. If the floor comprises roughly the same material, the albedo should be similar. However, if half of the floor is wool carpet and half is marble tile, then this assumption does not hold.

5.4 The quality of the estimation

Are the occupancy distribution estimation results shown in Section “Results” good enough? The answer to this question depends on the problem being solved. A researcher from the computer graphics or the tomography community may think the results given in Figures 14, 15, 16 and 17 are unimpressive. However, we are working in a very different regime, under very challenging constraints. Reconstructing a highly accurate, high-resolution volume that reproduces every wrinkle on an occupant’s T-shirt is not our goal. We are controlling the LED fixtures of a smart lighting system, for the purpose of energy efficiency, productivity, and human comfort. We are not producing 3D animations, we are not identifying who is in the room, and we are not using a stage light to follow a dancer precisely. We are simply controlling luminaires that people use every day, such as in an office, a conference room, or a living room. Thus what we need to know is this: what areas of the room are occupied? We do not need, and for legal reasons should take care not to obtain or use, any information beyond that. Knowing more than needed would raise privacy concerns. Thus, for our smart lighting problem, the occupancy distribution estimation results shown in Section “Results” suffice for the task. If the reconstruction is too rough, we can improve the precision by introducing more sensors.

5.5 Better hardware

In Section “Limitations of current testbed”, we explained that the limited performance of our current fixtures and sensors prevents us from implementing a real-time smart lighting system with the testbed, although they suffice for the validation experiments in Section “Results”. The current fixture has a delay between the input signals being specified and the desired lighting condition being produced. The current SeaChanger Colorbug sensors have an integration time during which color is measured, and a communication time for Wi-Fi handshaking and data transmission. In the future, faster LEDs will be installed to replace our current fixtures, and more customizable color sensors can be built from low-cost commercially available components. Instead of using Wi-Fi, directly wiring the sensors to the system should significantly reduce the communication delay. We also expect that in the future, the color sensor will often be built into the LED fixture circuit as a combined product. This will make the system much easier to install, more aesthetically pleasing, and lead to a more affordable complete lighting solution.

5.6 Broader applications

In this paper we have discussed controlling the lighting condition in a space such as an office or a living room. We also point out that in the future this technique may apply to the lighting control of any indoor space. For example, controlling the lighting condition in a barn could improve agricultural productivity. Controlling the lighting condition in a sickroom could accelerate the healing process. We can also control the lighting condition in a hallway, a warehouse, or a large vehicle.

6 Conclusion

We have presented a novel technique to estimate the occupancy distribution in an indoor space using color-controllable LED fixtures and sparsely distributed color sensors. This technique can be used to implement occupancy-sensitive, privacy-preserving smart lighting systems. The key idea is to modulate imperceptible perturbations onto the light and measure the resulting changes in the sensor outputs to recover a light transport matrix. Two approaches, based on the light blockage model and the light reflection model, respectively, are proposed to estimate the occupancy distribution from the light transport matrix. Because of the small number of fixtures and sensors, and the largely overlapping light fields of different fixture-sensor pairs, the occupancy distribution estimation problem is ill-posed and extremely challenging. Both approaches produce results that suffice to infer the occupancy scenario in the space, yet are coarse enough to protect the privacy of the occupants.
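
For readers who prefer the key idea in code, the following minimal sketch (our own notation, assuming a linear sensor response and zero-mean random perturbations; not the paper's implementation) recovers a light transport matrix from perturbation-response pairs by least squares. The deviation of this matrix from the empty-room baseline is what the blockage and reflection models then interpret.

```python
import numpy as np

def recover_light_transport(perturbations, responses):
    """Least-squares estimate of the transport matrix A such that
    responses ~= perturbations @ A, where each row of `perturbations`
    is one random pattern applied to the fixture drive signals and each
    row of `responses` is the corresponding change in sensor readings."""
    A, *_ = np.linalg.lstsq(perturbations, responses, rcond=None)
    return A

# Toy example: 4 fixtures, 2 sensors, 50 perturbation patterns.
rng = np.random.default_rng(0)
A_true = rng.random((4, 2))                        # unknown room transport
P = rng.uniform(-1, 1, size=(50, 4))               # imperceptible perturbations
D = P @ A_true + 0.01 * rng.normal(size=(50, 2))   # noisy sensor changes

A_est = recover_light_transport(P, D)
# Occupancy changes the room's light transport; comparing A_est against
# the empty-room baseline yields the signal fed to the blockage and
# reflection models.
```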

7 Endnotes

a Figure 6 needs to be viewed in color to be fully appreciated.

b The room may include furnishings. By “empty” we mean no occupants (humans, animals).

c The unit here is fraction of the sphere, not steradian.


Acknowledgments

This work was supported primarily by the Engineering Research Centers Program (ERC) of the National Science Foundation under NSF Cooperative Agreement No. EEC-0812056 and in part by New York State under NYSTAR contract C090145.

The authors would like to thank Prof. Robert Karlicek for his constructive suggestions. The authors would also like to thank Dr. Zhenhua Huang, Mr. Lawrence Fan, Mr. Sina Afshari, Dr. Li Jia, Mr. Cyril Acholo, Mr. Anqing Liu, Prof. Richard J. Radke, Prof. Sandipan Mishra and Mr. Charles Goodwin for the helpful discussions.

Author information

Corresponding author

Correspondence to Quan Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

QW conceived the basic idea of this work, implemented the methods, carried out the experiments, and wrote the manuscript. XZ helped with the data collection, and proposed the genetic algorithm for perturbation ordering. KB supervised the entire project. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wang, Q., Zhang, X. & Boyer, K.L. Occupancy distribution estimation for smart light delivery with perturbation-modulated light sensing. J Sol State Light 1, 17 (2014). https://doi.org/10.1186/s40539-014-0017-2

