Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 16-19 and 21-30 are rejected under 35 U.S.C. 103 as being unpatentable over Toshikazu et al. (EP 0814344A2) in view of Horak et al. (US PGPUB 20180359432), and further in view of Bourdis et al. (US PGPUB 20220245914) (the priority application for this reference, EP 19305624.9, was published on 11/19/2020).
[Claim 16]
Toshikazu teaches a system for tracking performers on a show stage, including:
a light projector (Paragraph 186, fig. 40: lighting equipment 72, which floodlights the object to be tracked 90) projecting light to track the movement of the performers on the stage,
at least one motorized yoke (support for holding cameras) having at least two axes of rotation with direct axial drive for the support and the omnidirectional movement of an assembly (Paragraph 180, a rotating table 71 provided with motors for controlling the two axes of a pan axis (in the horizontal direction) and a tilt axis (in the vertical direction) is provided, the CCD cameras 60 and 65 are placed on the rotating table 71 and the rotating table 71 is rotated in the panning and tilting directions by a control signal from the main control unit 80) comprising:
an image acquisition assembly formed of a color camera (color camera 65, fig. 40) and of an infrared camera (IR camera 60, fig. 40),
an image processing module to receive, from the infrared camera, images of the performers performing on the stage in order to determine their coordinates (Paragraph 188, According to the present tracking apparatus, the main control unit 80 calculates the position of the object to be tracked 90 and outputs the displacement Δx, Δy from the center of the screen in Step 12 or Step 15. Then, based on this output, the main control unit 80 controls the rotating table 71 so that the object to be tracked 90 existing in the position of Δx, Δy displaced from the center of the screen is shifted into the center of the screen (Step 16). In this case, Δx and Δy are made to correspond respectively to the panning direction and the tilting direction, thereby executing feedback control with Δx and Δy served as a deviation), and
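For illustration only (not drawn from Toshikazu's disclosure), the feedback loop cited above can be sketched in Python as a proportional controller that drives the screen-center deviation Δx, Δy toward zero; the gains, step limit, and command convention are hypothetical assumptions:

```python
# Minimal sketch of the dx/dy feedback control cited above.
# Gains, limits, and the command convention are assumptions.

def pan_tilt_correction(dx_px, dy_px, k_pan=0.05, k_tilt=0.05, max_step_deg=2.0):
    """Map the target's displacement from screen center (pixels) to
    bounded pan/tilt corrections (degrees); dx/dy act as the feedback
    deviation that the rotating table drives toward zero."""
    pan = max(-max_step_deg, min(max_step_deg, k_pan * dx_px))
    tilt = max(-max_step_deg, min(max_step_deg, k_tilt * dy_px))
    return pan, tilt

# Example: target detected 40 px right of and 10 px above screen center.
print(pan_tilt_correction(40, -10))  # -> (2.0, -0.5)
```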
to receive, from the color camera (65), visible images of these performers allowing them to be identified if necessary (Paragraph 146, According to the present tracking apparatus, an infrared light transmitter 91 which serves as a marker is preparatorily attached to an object to be tracked 90 and outputs the position of the object to be tracked 90 by combining a process for extracting the infrared light outputted from the infrared light transmitter 91 from an image picked up by the CCD camera 60 for the detection of the position of the object to be tracked with a process for detecting the position of an object to be tracked from a color image picked up by the color CCD camera 65 based on the image characteristic of the object to be tracked) and
a display screen (Paragraph 88, The image input display section 54a outputs the pickup image inputted from the camera 53 to the display device 55 so as to display the image on the display screen) to display a visible image of each of the performers performing on the stage obtained by the color camera (figs. 29a and 29b) and to allow the assignment by an operator at a control terminal or a media server of the determined light or video projectors to identified performers on the stage, in order to ensure automatic tracking by the light or video projectors of the respective movements of these identified performers (Paragraph 189, Then, the main control unit 80 reads the value of the potentiometer corresponding to the angles of rotation in the panning and tilting directions from the rotating table 71 and obtains the posture of the rotating table 71, and in its turn, the postures of the attached CCD cameras 60 and 65 (Step 16). Subsequently, in Step 17, from the position of the CCD camera 60 or the CCD camera 65 and the posture of one of the CCD cameras 60 and 65, the main control unit calculates an intersection of the optical axis of the CCD camera and the floor surface or an intersection of the optical axis and a specified height and then executes a coordinate transformation process for calculating the direction of the lighting equipment from the intersection and the position of the lighting equipment 72. Further, subsequently, the main control unit 80 controls the rotating table 71 similarly to Step 16 of the fourteenth embodiment (Step 18). Then, the main control unit controls the lighting equipment 72 so that the lighting is directed toward the object to be tracked 90 (Step 19)).
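By way of illustration (an assumption-laden sketch, not Toshikazu's disclosed implementation), the coordinate transformation of Step 17 can be expressed as a ray/floor intersection followed by an aiming computation for the lighting equipment; the coordinate conventions and all numbers below are hypothetical:

```python
import math

def optical_axis_floor_intersection(cam_pos, pan_deg, tilt_deg):
    """Camera at cam_pos = (x, y, z); pan measured in the horizontal
    plane, tilt measured downward from horizontal. Returns the point
    where the optical axis meets the floor plane z = 0."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    direction = (math.cos(tilt) * math.cos(pan),
                 math.cos(tilt) * math.sin(pan),
                 -math.sin(tilt))
    if direction[2] >= 0:
        raise ValueError("optical axis does not intersect the floor")
    t = -cam_pos[2] / direction[2]
    return tuple(c + t * d for c, d in zip(cam_pos, direction))

def light_angles(light_pos, target):
    """Pan/tilt (degrees) that aim a fixture at light_pos toward target."""
    dx, dy, dz = (t - l for t, l in zip(target, light_pos))
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))
    return pan, tilt

# A camera 5 m up sees the target; a light 6 m up is re-aimed at it.
spot = optical_axis_floor_intersection((0.0, 0.0, 5.0), pan_deg=30.0, tilt_deg=45.0)
print(light_angles((4.0, 0.0, 6.0), spot))
```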
Toshikazu fails to teach a dichroic mirror separating and directing the image collected through the common objective lens, on the one hand towards the color camera and on the other hand towards the neuromorphic camera; that the second camera is a neuromorphic camera; and the use of a plurality of light or video projectors to independently track a plurality of actors.
However, Horak teaches that images in visible light (hereinafter called "visible images" for conciseness), generally in colors, and images in the near-infrared are acquired independently by means of two distinct matrix sensors. In order to reduce the bulk, these two sensors can be associated with a single image-forming optical system via a dichroic beam splitter, so as to form a bi-spectral camera (Paragraph 4).
Therefore, taking the combined teachings of Toshikazu and Horak, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide a dichroic mirror separating and directing the image collected through the common objective lens, in order to reduce bulk.
Toshikazu in view of Horak fails to teach that the second camera is a neuromorphic camera and the use of a plurality of light or video projectors to independently track a plurality of actors.
However, Bourdis teaches that the object 3 may be a person, another moving object, or a plurality of the former, whose position, posture and orientation are to be detected and tracked. The object 3 carries at least one marker 4. Typically, a plurality of markers is fixed on the surface of the object 3. The object 3 is positioned in the acquisition volume 1, so that the marker can be observed and sensed by the event-based light sensors 51, 52. Alternatively, the marker 4 can also be active, i.e. using a power source and emitting light, for example visible or near-infrared light, which may cause the event-based light sensor to generate events (Paragraph 49). Event-based sensors are also known as neuromorphic cameras.
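As an illustrative aside (the field names and windowing step are assumptions, not Bourdis's implementation), such an event-based sensor outputs a stream of per-pixel events rather than frames, and a marker position can be estimated from recent events:

```python
from collections import namedtuple

# Hypothetical event record: pixel address, timestamp, polarity.
Event = namedtuple("Event", ["x", "y", "t_us", "polarity"])

def centroid_of_recent_events(events, now_us, window_us=2000):
    """Estimate a marker position as the centroid of events seen in the
    last window_us microseconds, a lightweight tracking step."""
    recent = [e for e in events if now_us - e.t_us <= window_us]
    if not recent:
        return None
    n = len(recent)
    return (sum(e.x for e in recent) / n, sum(e.y for e in recent) / n)

stream = [Event(120, 80, 1000, 1), Event(121, 81, 1500, 0), Event(119, 79, 1900, 1)]
print(centroid_of_recent_events(stream, now_us=2000))  # -> (120.0, 80.0)
```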
Therefore, taking the combined teachings of Toshikazu, Horak and Bourdis, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to make the second camera a neuromorphic camera and to use a plurality of light or video projectors to independently track a plurality of actors, because event-based sensors have high temporal resolution and make it possible to use a much greater variety of light signals compared to conventional frame-based cameras.
[Claim 17]
Toshikazu teaches a motorized yoke but fails to teach an infrared diode module to illuminate the volume of the stage. However, Bourdis teaches that a retro-reflective reflector reflects external illumination light, e.g. from external infrared light sources, and the reflected light causes the event-based light sensor to generate events (Paragraph 47). It would have been obvious to provide such infrared illumination because event-based light sensors have high temporal resolution, making it possible to use a much greater variety of light signals compared to conventional frame-based cameras.
[Claim 18]
Toshikazu teaches infrared identifiers intended to be worn by each of the performers (Paragraph 146, According to the present tracking apparatus, an infrared light transmitter 91 which serves as a marker is preparatorily attached to an object to be tracked 90 and outputs the position of the object to be tracked 90 by combining a process for extracting the infrared light outputted from the infrared light transmitter 91 from an image picked up by the CCD camera 60 for the detection of the position of the object to be tracked with a process for detecting the position of an object to be tracked from a color image picked up by the CCD camera 65 based on the image characteristic of the object to be tracked 90, the image characteristic having been extracted and stored in initial setting).
[Claim 19]
Toshikazu, Horak and Bourdis fail to teach wherein the infrared identifiers are IR emitters sequenced at different frequencies comprised between 2 and 20 kHz. However, the examiner takes Official Notice that it is common to have IR emitters sequenced at frequencies between 2 and 20 kHz in order to make the identification process easier. Therefore, taking the combined teachings of Toshikazu, Horak and Bourdis, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have IR emitters sequenced at frequencies between 2 and 20 kHz in order to make the identification process easier.
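To make the noticed technique concrete (a hypothetical sketch; the performer-to-frequency table and the median-interval estimator are assumptions), an emitter in the 2-20 kHz band can be identified from the spacing of its ON events:

```python
import statistics

# Hypothetical assignment of modulation frequencies to performers.
PERFORMER_FREQS_HZ = {"performer_A": 2000, "performer_B": 5000, "performer_C": 12000}

def identify_by_frequency(on_event_times_us):
    """Estimate the blink frequency from the median interval between ON
    events (microsecond timestamps) and return the nearest performer."""
    intervals = [b - a for a, b in zip(on_event_times_us, on_event_times_us[1:])]
    if not intervals:
        return None
    freq_hz = 1e6 / statistics.median(intervals)
    return min(PERFORMER_FREQS_HZ, key=lambda k: abs(PERFORMER_FREQS_HZ[k] - freq_hz))

# Events roughly 200 us apart -> about 5 kHz -> performer_B.
print(identify_by_frequency([0, 200, 401, 600, 799]))
```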
[Claim 21]
Bourdis teaches an array projector to calibrate the color and neuromorphic cameras so as to define a tracking volume (Paragraph 58, During the setting of the event-based light sensors, it is also possible to calibrate them to estimate the parameters allowing to map 3D coordinates in the acquisition volume into 2D pixel coordinates, i.e. floating-point pixel addresses, in any of the event-based light sensor and Paragraph 59, For this purpose, as an example, a known pattern of markers, such as an asymmetric grid of blinking LEDs, is moved exhaustively across the acquisition volume and detected by each event-based light sensor. The event-based light sensors perceive the LEDs, recognize the blinking frequencies, and associate each 2D measurements to each element of the 3D structure) in order to estimate the posture and orientation of each light sensor.
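As an illustration of the calibration step (a sketch under stated assumptions; Bourdis does not disclose this particular algorithm), the 3D-to-2D mapping can be recovered from known marker positions and their detected pixel coordinates with the standard direct linear transform (DLT):

```python
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """points_3d: (N, 3) known marker positions; points_2d: (N, 2)
    detected pixel coordinates; N >= 6 non-coplanar points. Returns the
    3x4 projection matrix P (up to scale) from the SVD null space."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        Xh = [X, Y, Z, 1.0]
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
        rows.append([0.0] * 4 + Xh + [-v * c for c in Xh])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]

# Self-check with a synthetic camera: project known points, recover P.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=float)
pts2d = np.array([project(P_true, p) for p in pts3d])
P_est = dlt_projection_matrix(pts3d, pts2d)
q = np.array([0.5, 0.5, 0.5])
print(np.allclose(project(P_est, q), project(P_true, q)))  # True
```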
[Claim 22]
Toshikazu teaches wherein the motorized yoke further includes a power supply and control module as well as a wired, radio or light data communication module (Paragraph 180, a rotating table 71 provided with motors for controlling the two axes of a pan axis (in the horizontal direction) and a tilt axis (in the vertical direction) is provided, the CCD cameras 60 and 65 are placed on the rotating table 71 and the rotating table 71 is rotated in the panning and tilting directions by a control signal. A power supply is inherently needed in order to rotate the table, and the control signal must likewise be output by a controller of some kind).
[Claim 23]
This is a method claim corresponding to apparatus claim 16 and is analyzed and rejected based upon apparatus claim 16.
[Claim 24]
Bourdis teaches illuminating the volume of the stage by means of an infrared diode module integrated into the motorized yoke to create contrasts of light on each of the performers and facilitate their identification by the neuromorphic camera (a retro-reflective reflector reflects external illumination light, e.g. from external infrared light sources, and the reflected light causes the event-based light sensor to generate events, Paragraph 47). This is advantageous because event-based light sensors have high temporal resolution, making it possible to use a much greater variety of light signals compared to conventional frame-based cameras.
[Claim 25]
Bourdis teaches including the timestamp of the coordinates and their transmission to stage machinery (Paragraph 67, When the marker is present in the fields of view of the light sensors, the light sensor C1 generates event ev(i_c1, t_1) for a pixel having an address expressed as index i_c1 at coordinates (x_ic1, y_ic1) in the pixel array of light sensor C1 at a time t_1, the light sensor C2 generates event ev(i_c2, t_2) for a pixel having an address expressed as index i_c2 at coordinates (x_ic2, y_ic2) in the pixel array of light sensor C2 at a time t_2, . . . , and the light sensor Cn generates event ev(i_cn, t_n) for a pixel having an address expressed as index i_cn at coordinates (x_icn, y_icn) in the pixel array of light sensor Cn at a time t_n) in order to further improve the reliability of the detection and/or to match the detections across sensors.
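By way of illustration (a hypothetical sketch, not Bourdis's matching procedure), timestamped detections from two sensors can be paired with a coincidence window:

```python
def match_detections(t1_list_us, t2_list_us, window_us=100):
    """Greedily pair sorted timestamps from two sensors that differ by
    at most window_us microseconds; unmatched detections are dropped."""
    matches, j = [], 0
    t2_sorted = sorted(t2_list_us)
    for t1 in sorted(t1_list_us):
        while j < len(t2_sorted) and t2_sorted[j] < t1 - window_us:
            j += 1
        if j < len(t2_sorted) and abs(t2_sorted[j] - t1) <= window_us:
            matches.append((t1, t2_sorted[j]))
            j += 1
    return matches

print(match_detections([1000, 2000, 3050], [1040, 2900, 3010]))
# -> [(1000, 1040), (3050, 3010)]
```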
[Claim 26]
Bourdis teaches a first prior step of calibrating the color and neuromorphic cameras by means of an array projector defining a tracking volume (Paragraph 58, During the setting of the event-based light sensors, it is also possible to calibrate them to estimate the parameters allowing to map 3D coordinates in the acquisition volume into 2D pixel coordinates, i.e. floating-point pixel addresses, in any of the event-based light sensor and Paragraph 59, For this purpose, as an example, a known pattern of markers, such as an asymmetric grid of blinking LEDs, is moved exhaustively across the acquisition volume and detected by each event-based light sensor. The event-based light sensors perceive the LEDs, recognize the blinking frequencies, and associate each 2D measurements to each element of the 3D structure) in order to estimate the posture and orientation of each light sensor.
[Claim 27]
Bourdis teaches wherein the identification of the performers is ensured by facial recognition from the images coming from the color camera (Paragraphs 1 and 4, Machine vision is a field that includes methods for acquiring, processing, analyzing and understanding images for use in wide type of applications such as for example security applications (e.g., surveillance, intrusion detection, object detection, facial recognition, etc.), environmental-use applications (e.g., lighting control), object detection and tracking applications, automatic inspection, process control, and robot guidance etc).
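For illustration only (Bourdis names facial recognition as an application; the sketch below shows only the detection stage, assuming OpenCV and its bundled Haar cascade are available):

```python
import cv2

# Face detection on color-camera frames; a recognition stage that
# matches each face to a known performer would follow and is omitted.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_frame):
    """Return (x, y, w, h) boxes for faces found in a color frame."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```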
[Claim 28]
Toshikazu teaches wherein the identification of the performers is ensured by an infrared identifier worn by each of the performers and analyzed by the neuromorphic camera (Paragraph 146, According to the present tracking apparatus, an infrared light transmitter 91 which serves as a marker is preparatorily attached to an object to be tracked 90 and outputs the position of the object to be tracked 90 by combining a process for extracting the infrared light outputted from the infrared light transmitter 91 from an image picked up by the CCD camera 60 for the detection of the position of the object to be tracked with a process for detecting the position of an object to be tracked from a color image picked up by the CCD camera 65 based on the image characteristic of the object to be tracked 90, the image characteristic having been extracted and stored in initial setting).
[Claim 29]
Toshikazu, Horak and Bourdis fail to teach wherein the infrared identifiers are IR emitters sequenced at different frequencies comprised between 2 and 20 kHz. However, the examiner takes Official Notice that it is common to have IR emitters sequenced at frequencies between 2 and 20 kHz in order to make the identification process easier. Therefore, taking the combined teachings of Toshikazu, Horak and Bourdis, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have IR emitters sequenced at frequencies between 2 and 20 kHz in order to make the identification process easier.
[Claim 30]
Toshikazu teaches wherein the assignment of the light or video projectors to the different performers is carried out by the operator by means of a simple joystick or trackball of the control terminal (Paragraph 90, An input device 56 is a pointing device such as a mouse or joystick for moving a mouse cursor displayed on the display device 55. Designation of operation to the lighting controller is executed by locating the mouse cursor on an icon in the operating image section B or the display image S3 displayed in the pickup image displaying section A on the display screen and clicking the input device 56).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Toshikazu et al. (EP 0814344A2) in view of Horak et al. (US PGPUB 20180359432) and Bourdis et al. (US PGPUB 20220245914), and further in view of Brown (US PGPUB 20120099851).
[Claim 20]
Toshikazu, Horak and Bourdis fail to teach several motorized yokes and wherein, to facilitate the tracking between several areas, a first motorized yoke is a master yoke and the other motorized yokes are slave yokes. However, Brown teaches that in FIG. 12, balance pole 4 and master and slave gimbal yokes 30 and 31 have a hard interconnect to one axis of rotation of master sled 34 with respect to slave sled 35. Tie rod 32 is attached to pivoting yokes 33a, 33b at the slave and master sled ends of pole 18, respectively. The hard connection between the master sled end and the slave sled end by virtue of tie-rod 32 and yokes 33a, 33b facilitates transmitting the pivot angle of master sled 34 to slave sled 35. Synchronization in the respective pan axes of master and slave sleds 34, 35 can be achieved by either sensor/motor means or by means of tie-rods and cranks (Paragraph 92). Therefore, taking the combined teachings of Toshikazu, Horak, Bourdis and Brown, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide several motorized yokes wherein, to facilitate the tracking between several areas, a first motorized yoke is a master yoke and the other motorized yokes are slave yokes, in order to have proper synchronization between the master and slave cameras.
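As a closing illustration (a hypothetical sketch reusing the optical_axis_floor_intersection() and light_angles() helpers from the Claim 16 sketch above; positions and angles are made up), master/slave synchronization can be realized by having each slave yoke re-aim at the floor point the master is tracking:

```python
MASTER_POS = (0.0, 0.0, 6.0)
SLAVE_POSITIONS = {"slave_1": (8.0, 0.0, 6.0), "slave_2": (-8.0, 4.0, 6.0)}

def synchronize_slaves(master_pan_deg, master_tilt_deg):
    """Resolve the master yoke's target on the floor, then compute each
    slave yoke's own pan/tilt toward that shared target."""
    target = optical_axis_floor_intersection(MASTER_POS, master_pan_deg,
                                             master_tilt_deg)
    return {name: light_angles(pos, target)
            for name, pos in SLAVE_POSITIONS.items()}

print(synchronize_slaves(master_pan_deg=20.0, master_tilt_deg=50.0))
```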
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YOGESH K AGGARWAL whose telephone number is (571)272-7360. The examiner can normally be reached Monday - Friday 9:30-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sinh Tran, can be reached at (571) 272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YOGESH K AGGARWAL/ Primary Examiner, Art Unit 2637