DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. GB2309970.8, filed on 6/30/2023.
Specification
The title of the invention is not descriptive. The current title, “APPARATUS AND SYSTEM,” is generic and conveys nothing about the invention. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 16-18, 21, 30, 34, 35 is/are rejected under 35 U.S.C. 103 as being unpatentable over Slobodyanyuk (US 20160282449 A1), in view of Chen et al. (US 20180190015 A1, hereinafter Chen).
Regarding Claim 16, Slobodyanyuk teaches an apparatus comprising at least one processor (Slobodyanyuk, Paragraph [0010], “The electronic device comprises a controller that is configured to decode a received light signal to extract a received identifier”; Paragraph [0051], “the controller 418 includes a processor 542”), and at least one memory (Slobodyanyuk, Paragraph [0051], “processor 542 which is communicatively coupled to a memory 544 and a storage 546”) storing instructions that, when executed by the at least one processor, cause the apparatus at least to: transmit at least one first light signal (Slobodyanyuk, Paragraph [0009], “transmitting, by a first LIDAR system, a first light signal modulated to include a first identifier associated with the first LIDAR system into an environment”), receive the at least one first light signal (Slobodyanyuk, Paragraph [0028], “Each LIDAR system 106(1)-106(3) operates, at least in part, by detecting reflections of its respective light signal 108(1)-108(3)”; Paragraph [0042], “the LIDAR system 206(2) may use the reflection of the light signal 208(2) for determining a location and range”), receive at least one second light signal (Slobodyanyuk, Paragraph [0012], “receiving a second light signal from the environment”; Paragraph [0041], “The LIDAR system 206(2) modulates the light signal 208(2) to include a first ID associated with the LIDAR system 206(2) into the environment 212 (block 302). The light signal 208(2) has a predetermined wavelength that is based on operation and/or design criteria associated with the LIDAR systems 206(1)-206(4). The LIDAR system 206(2) receives a second light signal 208(3)-R having the same wavelength from the environment 212”), decode the at least one second light signal to obtain digital information encoded on the at least one second light signal (Slobodyanyuk, Paragraph [0042], “The LIDAR system 206(2) decodes the second light signal 208(3)-R and extracts the second ID”; Paragraph [0043], “The LIDAR system 206(2) extracts the service information from the second light signal 208(3)-R”; Paragraph [0050], “The controller 418 utilizes the signal information 420(4)-420(6) from the baseband processor 540 and the signal information 420(1)-420(4) from the photodetector 538 to decode the information 214(2) from the light signal 208(2)”), perform a detection and ranging operation based on receiving the at least one first light signal (Slobodyanyuk, Paragraph [0005], “LIDAR operates by transmitting a laser signal into an environment and detecting reflections of the laser signal. Based on the known orientation of the laser signal at the time the laser signal was transmitted and other factors, a LIDAR system can determine relative locations and ranges of the surfaces from which the laser signal was reflected”; Paragraph [0028], “by detecting reflections of its respective light signal 108(1)-108(3) and determining a direction and range of a surface that caused the reflection”), and display the three-dimensional model to a user of the apparatus (Slobodyanyuk, Paragraph [0054], “A display 568 may be capable of presenting information to an occupant of the vehicle 202”; Paragraph [0055], “provide the message for presentation to the occupant of the vehicle 202 on the display 568”).
But Slobodyanyuk does not explicitly disclose in response to the detection and ranging operation, create a three-dimensional model, and in response to the obtained digital information, decoded from the at least one second light signal, update the three-dimensional model.
However, Chen teaches in response to the detection and ranging operation, create a three-dimensional model (Chen, Paragraph [0019], “The LiDAR device 10 transmits and receives laser lights 128, 148 to form a reflective points data. The computer 11 controls the work of the LiDAR device 10, processes the reflective points data, and builds a 3D model according to the reflective points data”; Paragraph [0022], “The ICP calculating module 113 calculates the reflective points data by ICP method and obtains a 3D point cloud. The 3D modeling module 114 builds a 3D model according to the 3D point cloud”), and in response to the obtained digital information, decoded from the at least one second light signal, update the three-dimensional model (Chen, Paragraphs [0037]-[0038], “step S18, obtaining a new relative displacement by calculating the new reflective points data and the (N-1)th reflective points data, obtaining an updated 3D point cloud by adding the new reflective points data in the (N-1)th point cloud, and go to step S19; [0038] step S19, updating the 3D model according to the updated 3D point cloud”).
Chen and Slobodyanyuk are analogous since both of them are dealing with LiDAR-based detection and ranging systems that operate by transmitting and receiving light signals to obtain environmental information. Slobodyanyuk provided a way of using LiDAR systems to communicate encoded information between devices by transmitting light signals modulated with identifiers and service information, receiving second light signals from the environment, decoding those signals to extract the encoded digital information, and performing ranging operations. Chen provided a way of using LiDAR reflective points data to build and continuously update three-dimensional models of the environment through iterative processing of received data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the 3D model creation and updating techniques taught by Chen into the modified invention of Slobodyanyuk such that the received and decoded digital information from second light signals can be used to create and update three-dimensional models of the environment. The motivation is to enhance the situational awareness capabilities of the LiDAR system by combining the communication functionality with spatial modeling, allowing the system to not only receive encoded information from other sources but also visualize that information within a three-dimensional representation of the surrounding environment for improved user understanding and decision-making.
Regarding Claim 17, the combination of Slobodyanyuk and Chen teaches the invention in Claim 16.
The combination further teaches provide a user interface, wherein the user interface is configured to receive user selection input of one or more objects in the three-dimensional model (Slobodyanyuk, Paragraph [0039], “the vehicle electronics module may include a user interface that enables an occupant of a respective vehicle 202(1)-202(4) to enter data into the vehicle electronics module for transmission via the LIDAR system 206(1)-206(4) to other LIDAR systems 206(1)-206(4)”; Paragraph [0054], “A display 568 may be capable of presenting information to an occupant of the vehicle 202”).
Regarding Claim 18, the combination of Slobodyanyuk and Chen teaches the invention in Claim 16.
The combination further teaches update the three-dimensional model in response to at least one of: [[decoding the second light signal; a determination that the three-dimensional model should be updated, the determination being based on a comparison of the information provided by the three-dimensional model and the obtained digital information]] or a detection of a user input to the apparatus (Slobodyanyuk, Paragraph [0039], “the vehicle electronics module may include a user interface that enables an occupant of a respective vehicle 202(1)-202(4) to enter data into the vehicle electronics module”).
Slobodyanyuk does not explicitly disclose decoding the second light signal; a determination that the three-dimensional model should be updated, the determination being based on a comparison of the information provided by the three-dimensional model and the obtained digital information.
However, Chen teaches update the three-dimensional model in response to at least one of: decoding the second light signal (Chen, Paragraph [0038], “step S19, updating the 3D model according to the updated 3D point cloud”), a determination that the three-dimensional model should be updated, the determination being based on a comparison of the information provided by the three-dimensional model and the obtained digital information (Chen, Paragraphs [0031]-[0032], “step S131, obtaining the Nth position using the Nth reflective points data”; “comparing the Nth position with the (N-1)th position”; Paragraph [0037], “obtaining a new relative displacement by calculating the new reflective points data and the (N-1)th reflective points data”).
Chen and Slobodyanyuk are analogous since both of them are dealing with LiDAR-based detection and ranging systems that process received light signals to obtain spatial and environmental information. Slobodyanyuk provided a way of receiving and decoding second light signals from the environment to extract digital information encoded on those signals, and performing actions based on the received service information. Chen provided a way of continuously updating three-dimensional models by comparing newly received reflective points data with previously obtained data to determine when and how the model should be updated. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the conditional 3D model updating technique taught by Chen into the modified invention of Slobodyanyuk such that the three-dimensional model is updated in response to decoding the second light signal and based on comparison determinations between new and existing spatial information. The motivation is to provide intelligent and efficient 3D model management that updates the spatial representation only when meaningful changes are detected through comparison of the decoded information with existing model data, thereby reducing computational overhead while maintaining model accuracy.
Regarding Claim 21, the combination of Slobodyanyuk and Chen teaches the invention in Claim 16.
The combination further teaches use the obtained digital information to control creation of a bidirectional communication channel between the apparatus and another apparatus (Slobodyanyuk, Paragraph [0007], "Mechanisms that enable LIDAR systems to distinguish among the signals transmitted by different LIDAR systems would enable accurate location and range determinations, and could also facilitate the communication of useful information among LIDAR systems"; Paragraph [0037], "Each light signal 208(1)-208(4) is modulated to include respective information 214(1)-214(4)"; Paragraph [0040], "A LIDAR system 206(1 )-206(4) that receives such service information may perform the action of communicating the service information to a downstream vehicle electronics module within the vehicle 202(1 )-202(4) for subsequent action").
Regarding Claim 30, it recites limitations similar in scope to the limitations of Claim 16, but as a method, and the combination of Slobodyanyuk and Chen teaches all the limitations of Claim 16. Therefore, it is rejected under the same rationale.
Regarding Claim 34, it recites limitations similar in scope to the limitations of Claim 16, and the combination of Slobodyanyuk and Chen teaches all the limitations of Claim 16. Slobodyanyuk further discloses that these features can be implemented on a computer-readable storage medium (Slobodyanyuk, Paragraph [0068], “with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer readable medium and executed by a processor or other processing device”).
Regarding Claim 35, it recites limitations similar in scope to the limitations of Claim 21 and therefore is rejected under the same rationale.
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Slobodyanyuk (US 20160282449 A1), in view of Chen et al. (US 20180190015 A1, hereinafter Chen) as applied to Claim 16 above and further in view of Sung et al. (US 10693511 B1, hereinafter Sung).
Regarding Claim 19, the combination of Slobodyanyuk and Chen teaches the invention in Claim 16.
The combination further teaches receiving a plurality of second light signals with different directions of arrival (Slobodyanyuk, Paragraph [0057], “the LIDAR system 906(1) may have multiple transmit modules 416 and receiver modules 532 (not shown) mounted on the first vehicle 902(2) to transmit light signals 208(OUT) (not shown) in multiple directions and receive light signals 208(IN) (not shown) from multiple directions”; Paragraph [0058], “each of the LIDAR systems 906(1)-906(3) periodically or continuously transmits light signals 908(1)-908(3) that contain information 914(1)-914(3), respectively”); decoding the plurality of second light signals to obtain respective digital information (Slobodyanyuk, Paragraph [0060], “The vehicle 902(1) receives the light signal 908(2), decodes the light signal 908(2) to extract the ID” and “extracts the service information from the light signal 908(2).” Similarly, “The vehicle 902(1) receives the light signal 908(3), decodes the light signal 908(3) to extract the ID” and “extracts the service information from the light signal 908(3)”).
But the combination does not explicitly disclose based on a direction of arrival of the respective second light signals, classify the obtained digital information into different groups.
However, Sung teaches based on a direction of arrival of the respective second light signals, classify the obtained digital information into different groups (Sung, FIG. 2, step 202; Column 3, lines 46-48; Abstract, “Control circuitry 110 determines a primary direction-of-arrival for the wireless user signal”; Column 3, lines 48-50, “The direction-of-arrival is determined based on the amount of energy and the phase of the received wireless signal at multiple antennas”; FIG. 2, step 207; Column 4, lines 27-29, “Control circuitry 111 reconfigures the digital filter in detector circuitry 112 for multiple directions-of-arrival”; FIG. 2, step 208; Column 4, lines 33-36, “In detector circuitry 112, the digital filter filters the received user signal based on multiple directions-of-arrival and recovers the user data”; Column 9, lines 28-38, “Digital filters 614 filter frequency-domain user data based on a single direction-of-arrival so energy from the primary direction is processed and energy from other directions is suppressed ... Digital filters 614 also filter frequency-domain user data based on multiple directions-of-arrival so energy from the multiple directions is processed and energy from the remaining directions is suppressed”; Column 9, lines 40-44, “Digital filters 614 transfer filtered signals for each direction-of-arrival to MMSE 615, where the primary direction signal is denoted by a ‘P’ and the secondary direction signals are denoted by an ‘S’”; Column 9, lines 46-49, “When noise or uplink utilization are below their thresholds, then only the P signal is used. When noise or uplink utilization are above their thresholds, then the P signal and one or more S signals are used”; it is noted that the received signals are classified into different groups (a “P” group for the primary direction signal and “S” group(s) for secondary direction signals) based on the direction of arrival of each signal, wherein the digital information from each direction is processed as a separate group).
Sung and Slobodyanyuk are analogous since both of them are dealing with receiving and processing wireless signals from multiple sources and distinguishing signals based on their characteristics to mitigate interference and improve signal processing. Slobodyanyuk provided a way of receiving light signals from multiple LIDAR systems, decoding the signals to extract identifiers and service information, and distinguishing the signals based on their unique identifiers. Sung provided a way of determining the direction of arrival of received wireless signals, and classifying the signals into different groups (primary direction signal group “P” and secondary direction signal groups “S”) based on their respective directions of arrival for improved signal processing and noise handling. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the technique of classifying received signals into different groups based on direction of arrival taught by Sung into the modified invention of Slobodyanyuk such that the LIDAR system not only identifies signals by their identifiers but also classifies the obtained digital information into different groups based on the direction of arrival of the respective light signals. The motivation is to effectively and efficiently use direction-of-arrival filtering to handle excessive radio noise and interference from multiple signal sources as discussed by Sung in Column 4, lines 13-15; and to enable the LIDAR system to distinguish and separately process signals from multiple LIDAR sources based on their spatial origin, thereby improving signal organization, reducing interference, and enhancing data processing capabilities as discussed by Sung in Column 1, lines 42-49 and Column 3, lines 27-66.
Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Slobodyanyuk (US 20160282449 A1), in view of Chen et al. (US 20180190015 A1, hereinafter Chen), further in view of Sung et al. (US 10693511 B1) as applied to Claim 19 above and further in view of Hosseini et al. (US 20180306925 A1, hereinafter Hosseini).
Regarding Claim 20, the combination of Slobodyanyuk, Chen, and Sung teaches the invention in Claim 19.
The combination further teaches [[use at least one of time division multiplexing]] to classify the obtained digital information into the different groups (Sung, Column 2, lines 19-22, “Data link 114 uses Time Division Multiplex (TDM), Data Over Cable System Interface Specification (DOCSIS), Wave Division Multiplexing (WDM), Ethernet, IP, WIFI, 5GNR, LTE and/or the like”; it is noted the use of time division multiplexing in the wireless communication system for processing and transmitting signals).
The combination does not explicitly disclose but Hosseini teaches [[frequency division multiplexing]] to classify the obtained digital information into the different groups (Hosseini, Abstract; Title, “wavelength division multiplexed LiDAR systems, methods, and structures,” wherein wavelength division multiplexing is a form of frequency division multiplexing; Paragraph [0005], “systems, methods, and structures according to the present disclosure employ a multiwavelength beam comprising the output of a plurality of individual lasers. The individual wavelengths comprising the multiwavelength beam are separated out into individual beams”; Paragraph [0042], “The grating splits the beam into separate beams in the grating dimension”; Paragraph [0044], “the bulk grating shown and described previously is advantageously replaced with a 1D optical phased array to split the multiwavelength input signal into component wavelengths”; Paragraph [0047], “The output of each lasers is directed into a wavelength (de)multiplexer wherein they are combined into a single multiwavelength signal”; Paragraph [0054], “the phased array emits each wavelength in one direction in space while also ‘staring’ in the same direction for that specific wavelength. As a result-when several lasers are employed-the phased array is illuminating and observing several points in space simultaneously and any backscattered, received light is identified from its wavelength”; Paragraph [0055], “the backscattered light traverses through the same (de)multiplexer circuits that combined the laser light(s) prior to emission; different wavelength components are separated; and each component is directed toward the laser from which it was originally emitted”; it is noted that frequency division multiplexing (via different wavelengths/frequencies) is used to separate, classify, and identify signals in a LIDAR system based on their wavelength/frequency channel).
Hosseini and Slobodyanyuk are analogous since both of them are dealing with separating and classifying signals from multiple sources in wireless sensing and communication systems. Slobodyanyuk provided a way of receiving light signals from multiple LIDAR systems and classifying the obtained digital information into different groups based on direction of arrival. Hosseini provided a way of using frequency division multiplexing (wavelength division multiplexing) to separate, classify, and identify signals in LIDAR systems based on their wavelength/frequency. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the use of frequency division multiplexing taught by Hosseini into the modified invention of Slobodyanyuk such that the LIDAR system uses at least one of time division multiplexing or frequency division multiplexing to classify the obtained digital information into different groups. The motivation is to provide additional mechanisms for signal classification and separation beyond spatial classification (direction of arrival), thereby enabling more robust discrimination between signals from multiple LIDAR sources and reducing signal interference as discussed by Sung in Column 2, lines 19-22; and to apply frequency division multiplexing techniques specifically suited for LIDAR systems to separate and identify received signals based on their wavelength/frequency.
Claim(s) 22-25, 31 is/are rejected under 35 U.S.C. 103 as being unpatentable over Slobodyanyuk (US 20160282449 A1), in view of Chen et al. (US 20180190015 A1, hereinafter Chen), further in view of Sung et al. (US 10693511 B1) as applied to Claim 16 above and further in view of Cincotta et al. (US 20210325505 A1, hereinafter Cincotta).
Regarding Claim 22, the combination of Slobodyanyuk and Chen teaches the invention in Claim 16.
The combination further teaches update and augment the three-dimensional model (Chen, FIG. 6, Paragraphs [0037]-[0038], “obtaining an updated 3D point cloud by adding the new reflective points data in the (N-1)th point cloud”; “updating the 3D model according to the updated 3D point cloud”).
Chen and Slobodyanyuk are analogous since both of them are dealing with LiDAR-based detection and ranging systems that operate by transmitting and receiving light signals to obtain environmental information and maintain a spatial representation of the environment. Slobodyanyuk provided a way of operating LiDAR systems to transmit and receive light signals, detect reflections, and determine direction and range information for surfaces in the environment. Chen provided a way of continuously updating a three-dimensional model using newly obtained reflective points data, including obtaining an updated 3D point cloud by adding new reflective points data into a prior point cloud, and updating the 3D model according to the updated 3D point cloud. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the 3D point cloud updating and 3D model updating techniques taught by Chen into the modified invention of Slobodyanyuk such that Slobodyanyuk's LiDAR-based system can update and augment its three-dimensional model as additional reflective points data is obtained from received light signals.
The combination does not explicitly disclose but Cincotta teaches determine that a direction of arrival of the second light signal corresponds to a bearing of a first object in the three-dimensional model (Cincotta, Paragraph [0036], “The AOA <read on direction of arrival> of the light from the modulated light source 6 can then be determined based on the said pre-determined distance, and the displacement of the light spot 17 from the centre of the quadrant PD 19”), and based upon the determination that a direction of arrival of the second light signal corresponds to a bearing of a first object (Cincotta, Paragraph [0011], “a non-imaging receiver for estimating an angle of arrival (AOA) <read on direction of arrival> of light from each said modulated light source”), associating the digital information with the first object in the three-dimensional model to augment the three-dimensional model (Cincotta, Paragraph [0044], “This information is then matched with reference point position information decoded by the QADA receiver 7, the reference point being shown as a dot 8 at the corner of luminaire A and positioned at xA, yA”; Abstract, “said AOA information and reference point positional information from the non-imaging receiver is matched to the image captured by the imaging receiver to obtain said spatial position information”).
Cincotta and Slobodyanyuk are analogous since both deal with processing received light signals to obtain spatially relevant information. Slobodyanyuk provides a system for receiving and decoding light signals and generating a three-dimensional model, while Cincotta provides a way to determine the angle of arrival (AOA) of a received light signal. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the AOA determination taught by Cincotta into the modified invention of Slobodyanyuk such that the direction of arrival of the second light signal corresponds to a bearing of a first object in the three-dimensional model, and decoded digital information is associated with the first object in the three-dimensional model.
Regarding Claim 23, the combination of Slobodyanyuk, Chen and Cincotta teaches the invention in Claim 22.
The combination further teaches determine that a range of the second light signal corresponds to a position of the first object in the three-dimensional model (Slobodyanyuk, Paragraph [0028], “by detecting reflections of its respective light signal 108(1)-108(3) and determining a direction and range of a surface that caused the reflection”).
Slobodyanyuk does not explicitly disclose but Cincotta teaches determine that a range of the second light signal corresponds to a position of the first object in the three-dimensional model (Cincotta, Paragraph [0044], “This information is then matched with reference point position information decoded by the QADA receiver 7, the reference point being shown as a dot 8 at the corner of luminaire A and positioned at xA, yA. These values are then used to calculate the position of the receiver arrangement”), and based upon the determination that the range of the second light signal corresponds to the position of the first object and the determination that the direction of arrival of the second light signal corresponds to the bearing of the first object (Cincotta, Paragraph [0011], “a non-imaging receiver for estimating an angle of arrival (AOA) <read on direction of arrival> of light from each said modulated light source”), associating the digital information with the first object in the three-dimensional model to augment the three-dimensional model (Cincotta, Paragraph [0044], “This information is then matched with reference point position information decoded by the QADA receiver 7, the reference point being shown as a dot 8 at the corner of luminaire A and positioned at xA, yA”; Abstract, “said AOA information and reference point positional information from the non-imaging receiver is matched to the image captured by the imaging receiver to obtain said spatial position information”).
The rationale for combining the range and AOA determination of Cincotta with Slobodyanyuk is provided in the rejection of Claim 22 above.
Regarding Claim 24, the combination of Slobodyanyuk, Chen and Cincotta teaches the invention in Claim 22.
The combination further teaches determine that a bearing of a second object in the three-dimensional model corresponds to the bearing of the first object (Cincotta, Abstract, “a plurality of luminaires 5, at least one of the luminaires including at least one associated modulated light source for transmitting a light signal providing positional information of one or more reference points associated with the luminaire”; Paragraph [0009], “visible light positioning receiver arrangement for obtaining spatial position information of the receiver arrangement from a plurality of luminaires, at least one of the luminaires including at least one associated modulated light source for transmitting a light signal providing positional information of one or more reference points associated with that luminaire”; Paragraph [0041], “The modulated light sources 6 have associated reference points 8 which in some instances can be used to assist the positioning process. These reference points can be identified within an image of the luminaires”; Paragraph [0043], “FIG. 3 (a) to (j) is a diagram showing a number of possible luminaire 5 and modulated light source 6 configurations for providing positional information of a reference point 8 associated with a luminaire 5”; FIG. 3(c), showing a blue LED modulated light source 6 located at a corner of the luminaire 5 providing positional information of multiple reference points 8 shown as a series of red marks on the frame of the luminaire 5; Paragraph, “FIG. 4 shows an image captured by a mobile phone camera acting as the imaging receiver upon which is overlaid positional information obtained from the QADA receiver 7. The captured image shows four separate batten luminaires 5 respectively marked A to D, with a LED at the centroid of each luminaire being used as the modulated light source 6 ... the LED modulated light source 6 in luminaire A ... This information is then matched with reference point position information decoded by the QADA receiver 7, the reference point being shown as a dot 8 at the corner of luminaire A and positioned at xA, yA”; it is noted that luminaire A has both a modulated light source 6 at its centroid and a reference point 8 at its corner, and the system processes both as distinct objects with positional information, thereby teaching that a second object (reference point 8) has a bearing that corresponds to the bearing of the first object (modulated light source 6) since both are associated with the same luminaire A and their bearings from the receiver are substantially the same); and based upon the determination that the bearing of the second object corresponds to the bearing of the first object, associating the digital information with the second object in the three-dimensional model to augment the three-dimensional model (Cincotta, Paragraph [0044], “This information is then matched with reference point position information decoded by the QADA receiver 7, the reference point being shown as a dot 8 at the corner of luminaire A and positioned at xA, yA. These values are then used to calculate the position of the receiver arrangement”; Abstract, “said AOA information and reference point positional information from the non-imaging receiver is matched to the image captured by the imaging receiver to obtain said spatial position information”; it is noted that the system associates the decoded digital information (reference point positional information) with the second object (reference point 8) in the captured image, thereby augmenting the spatial position model).
Cincotta and Slobodyanyuk are analogous since both deal with light-based positioning and detection systems that process multiple objects or reference points within a scene to create spatial models with associated positional information. Slobodyanyuk provided a way of using LIDAR systems to decode digital information from received light signals and associate that information with objects in a three-dimensional model of the environment. Cincotta provided a way of determining direction of arrival (bearing) and position information for multiple reference points associated with luminaires, where multiple reference points can be associated with the same luminaire and processed as distinct objects within the positioning system, and matching that information to objects captured in an image to obtain spatial position information. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the technique taught by Cincotta of determining that a second object's bearing corresponds to a first object's bearing and associating decoded digital information with both the first and second objects into the modified invention of Slobodyanyuk such that when multiple objects (e.g., multiple reference points or features) share the same or substantially similar bearing in the three-dimensional model, the decoded digital information from the received light signal is associated with both the first object and the second object to augment the three-dimensional model.
Regarding Claim 25, the combination of Slobodyanyuk, Chen and Cincotta teaches the invention in Claim 24.
The combination further teaches determine that a position of the second object in the three-dimensional model corresponds to the position of the first object in the three-dimensional model (Cincotta, Paragraph [0017], "the reference point may be the location of the modulated light source. Alternatively, or in addition, the reference point may be the location of a physical feature of the luminaire, for example, a corner or centroid of the luminaire. Alternatively, or in addition, the reference point may be one or more visible marks provided on or adjacent to the luminaire"; it is noted that multiple reference points can be positioned at or near the same location, such as when a modulated light source is co-located with a physical feature of the luminaire; Paragraph [0043], "FIG. 3(d) shows an IR modulated light source 6 located at a corner of the luminaire 5, and providing positional information of the IR modulated light source 6 as the reference point 8"; it is noted that the modulated light source 6 and the reference point 8 are at the same position, the corner of the luminaire; Paragraph [0044], "the reference point being shown as a dot 8 at the corner of luminaire A and positioned at xA, yA" [it is noted that when multiple objects are associated with the same luminaire at similar or overlapping positions, their positions in the three-dimensional model correspond to each other]); based upon the determination that the position of the second object corresponds to the position of the first object and the determination that the bearing of the second object corresponds to the bearing of the first object, comprising of associating the digital information with the second object in the three-dimensional model, augment the three-dimensional model (Cincotta, Paragraph [0044], "This information is then matched with reference point position information decoded by the QADA receiver 7, the reference point being shown as a dot 8 at the corner of luminaire A and positioned at 
xA, yA. These values are then used to calculate the position of the receiver arrangement"; Paragraph, "The angle of arrival information obtained from the QADA can be used to identify the luminaires in the image captured by the imaging receiver. Triangulation of the locational information then allows the spatial position of the receiver arrangement 3 to be accurately determined"; it is noted that when both the bearing (AOA) and position information match between two objects, the system associates the decoded digital information with both objects to augment the three-dimensional model).
The motivation for combining Cincotta with Slobodyanyuk is provided in the rejection of Claim 24 above.
Regarding Claim 31, it recites limitations similar in scope to the limitations of Claim 22 and therefore is rejected under the same rationale.
Claim(s) 26, 28, 29, 32, 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Slobodyanyuk (US 20160282449 A1), in view of Chen et al. (US 20180190015 A1, hereinafter Chen), and further in view of Cincotta et al. (US 20210325505 A1, hereinafter Cincotta) as applied to Claims 22 and 31 above, respectively, and further in view of Kim et al. (US 20210304508 A1, hereinafter Kim).
Regarding Claim 26, the combination of Slobodyanyuk, Chen and Cincotta teaches the invention in Claim 22.
The combination does not explicitly disclose but Kim teaches selectively adapt at least one of the first object or the second object (Kim, Paragraph [0039], "The erase area selection unit 120 receives an erasing area from a user. Here, the erasing area is an area corresponding to an object in 3D augmented reality, and the erasing area is selected by the user.").
Kim and Slobodyanyuk are analogous since both deal with modifying representations of objects based on received information in augmented reality or spatial representations. Slobodyanyuk provided a way of receiving and decoding light signals and creating/updating a three-dimensional model of an environment based on LiDAR detection and ranging. Kim provided a way of selecting an area corresponding to an object in 3D augmented reality and modifying the representation by erasing (removing/obscuring) the selected area. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the selective object adaptation technique taught by Kim into the modified invention of Slobodyanyuk such that at least one of the first object or the second object in the three-dimensional model can be selectively adapted.
Regarding Claim 28, the combination of Slobodyanyuk, Chen, Cincotta and Kim teaches the invention in Claim 26.
The combination further teaches removing at least a portion of at least one of the first object or the second object from the three-dimensional model (Kim, Paragraph [0040], "The hole area mask generator unit 130 defines the erasing area input from a user as a hole area and generates a mask corresponding to the hole area. That is, the hole area mask generation unit 130 generates a mask corresponding to the area to be erased, and generates a hole in 3D augmented reality.").
The motivation for combining Kim with Slobodyanyuk is provided in the rejection of Claim 26 above.
Regarding Claim 29, the combination of Slobodyanyuk, Chen, Cincotta and Kim teaches the invention in Claim 26.
The combination further teaches obscuring at least a portion of at least one of the first object or the second object in the three-dimensional model (Kim, Paragraph [0040], "The hole area mask generator unit 130 defines the erasing area input from a user as a hole area and generates a mask corresponding to the hole area.").
The motivation for combining Kim with Slobodyanyuk is provided in the rejection of Claim 26 above.
Regarding Claim 32, it recites limitations similar in scope to the limitations of Claim 26 and therefore is rejected under the same rationale.
Regarding Claim 33, the combination of Slobodyanyuk, Chen, Cincotta and Kim teaches the invention in Claim 32.
The combination further teaches wherein selectively adapting at least one of the first object or the second object comprises at least one of: removing at least a portion of at least one of the first object or the second object from the three-dimensional model (Kim, Paragraph [0040], "The hole area mask generator unit 130 defines the erasing area input from a user as a hole area and generates a mask corresponding to the hole area. That is, the hole area mask generation unit 130 generates a mask corresponding to the area to be erased, and generates a hole in 3D augmented reality.") or obscuring at least a portion of at least one of the first object or the second object in the three-dimensional model (Kim, Paragraph [0040], "The hole area mask generator unit 130 defines the erasing area input from a user as a hole area and generates a mask corresponding to the hole area.").
The motivation for combining Kim with Slobodyanyuk is provided in the rejection of Claim 32 above.
Allowable Subject Matter
Claim(s) 27 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is an examiner’s statement of reasons for allowance:
Regarding Claim 27, the prior art of record, specifically Slobodyanyuk (US 20160282449 A1), teaches a LIDAR system that receives and decodes digital information from second light signals, creates and updates a three-dimensional model based on detection and ranging operations, and determines direction of arrival corresponding to bearings of objects in the three-dimensional model. The prior art Chen et al. (US 20180190015 A1) teaches creating and updating a three-dimensional model based on reflective points data from a LiDAR device, including obtaining updated 3D point clouds and updating the 3D model according to the updated point cloud. Additional prior art Cincotta et al. (US 20210325505 A1) teaches determining that a direction of arrival of a light signal corresponds to a bearing of objects (modulated light sources and reference points) in a positioning system, and associating decoded positional information with those objects to augment spatial position determination. Additional prior art Kim et al. (US 20210304508 A1) teaches selectively adapting objects in 3D augmented reality by receiving an erasing area from a user corresponding to an object, defining the erasing area as a hole area, generating a mask corresponding to the hole area, and erasing, removing, or obscuring portions of objects from the three-dimensional model. However, none of the prior art of record, alone or in combination, teaches or suggests determining that an alternative three-dimensional representation of at least one of the first object or the second object is available for download, downloading the alternative three-dimensional representation of the at least one of the first object or the second object, and augmenting the three-dimensional model by placing the alternative three-dimensional representation of the at least one of the first object or the second object in the three-dimensional model, as recited in Claim 27.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20180096528 A1 Automatic placement of augmented reality models
US 20170219695 A1 Multiple Pulse, LIDAR Based 3-D Imaging
US 20170201321 A1 Adaptive multiple input multiple output (mimo) optical orthogonal frequency division multiplexing (o-ofdm) based visible light communication
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached on (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YuJang Tswei/Primary Examiner, Art Unit 2614