DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is in response to the amendments and remarks filed on 12/15/2025. Claims 1-18 are pending.
Claim Objections
Claim 2 is objected to because of the following informalities: In line 6, the word “senses” should be “sensed”. Appropriate correction is required.
Claims 6 and 15 are objected to because of the following informalities: the amended claim language in the last limitation, “the visible light information including spectral information of visible light the mobile unit is exposed to, the spectral information being obtained at a limited number of predetermined wavelengths” appears to be repetitive/redundant with the language in the first limitation. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-4, 10, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL document Zhang, Chi, and Xinyu Zhang, “LiTell: Robust Indoor Localization Using Unmodified Light Fixtures,” Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking, New York, NY, USA: ACM, 2016, 230–242 (hereinafter "Zhang"), in view of U.S. Patent Publication No. 2018/0219623 ("Bitra").
Regarding claim 1, Zhang discloses a method of detecting a location of a mobile unit (smartphone, see section 6), comprising:
using a light sensor (CMOS image sensors used in smartphones, see overall sections 3-4, and “Receiver (smartphone) side implementation” in section 6), obtaining a plurality of pieces of spectral information (sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”; see also section 5: “obtaining an aliased copy of the CF”) of visible light (section 2.1: “A fluorescent light (FL) produces visible light...”) the mobile unit is exposed to (see sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”), the mobile unit being in an environment lit by a plurality of light sources (section 5.1 describes exposing the user/smartphone to multiple lights consecutively, and section 7.2.1 states: “In particular, FLs in the grocery store are closely placed in lines 2 meters away from the phone, resulting in multiple lights being captured simultaneously.”);
determining a signature of a light source (see section 5: “smartphone-extracted CF feature” and see overall section 4.3) independent from a distance or an angle between the light source and the sensor (see section 7.1, the signature is determined accurately even at a distance, within limits, and from different axial and lateral directions; therefore, it is not dependent on the distance/angle between the light and the sensor; see also section 7.2.1: “the CF feature is deterministic even when [received signal strength] RSS varies significantly”);
comparing the determined signature (“smartphone-extracted CF feature”) to previously stored signatures of the light sources (see section 5: “match the smartphone-extracted CF feature with CF fingerprints in the database,” and see “CF fingerprinting” in section 6) and identifying a light source having a signature that has a minimum error relative to the determined signature in the stored signatures (see section 5: “we find the fingerprint with minimum difference in CF. Finally, the fg with minimum matching distance is considered as the FL’s CF.”); and
estimating a current location of the mobile unit as one of being at a known installation location of the identified light source or being proximate to the known installation location of the identified light source (section “Empirical validation”: “We emphasize that LiTell can distinguish which light in the pair the user is currently at as long as CFs for the 2FLs in the pair are different”).
Zhang does not disclose that the spectral information is obtained at a limited number of predetermined wavelengths.
However, Bitra discloses obtaining information from a limited number of predetermined wavelengths (paragraph [0026], information encoded in a specific color, while other colors do not have encoded information therein).
It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain information from a limited number of predetermined wavelengths, as disclosed by Bitra, in the device of Zhang in order to reduce the time required to determine the signature and perform the matching process.
Regarding claim 10, Zhang discloses a mobile unit comprising a light sensor, the mobile unit configured to:
obtain a plurality of pieces of spectral information (sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”; see also section 5: “obtaining an aliased copy of the CF”) of visible light (section 2.1: “A fluorescent light (FL) produces visible light...”) the mobile unit is exposed to (see sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”), the mobile unit being in an environment lit by a plurality of light sources (section 5.1 describes exposing the user/smartphone to multiple lights consecutively, and section 7.2.1 states: “In particular, FLs in the grocery store are closely placed in lines 2 meters away from the phone, resulting in multiple lights being captured simultaneously.”);
determining a signature of a light source (see section 5: “smartphone-extracted CF feature” and see overall section 4.3) independent from a distance or an angle between the light source and the sensor (see section 7.1, the signature is determined accurately even at a distance, within limits, and from different axial and lateral directions; therefore, it is not dependent on the distance/angle between the light and the sensor; see also section 7.2.1: “the CF feature is deterministic even when [received signal strength] RSS varies significantly”);
comparing the determined signature (“smartphone-extracted CF feature”) to previously stored signatures of the light sources (see section 5: “match the smartphone-extracted CF feature with CF fingerprints in the database,” and see “CF fingerprinting” in section 6) and identifying a light source having a signature that has a minimum error relative to the determined signature in the stored signatures (see section 5: “we find the fingerprint with minimum difference in CF. Finally, the fg with minimum matching distance is considered as the FL’s CF.”); and
estimating a current location of the mobile unit as one of being at a known installation location of the identified light source or being proximate to the known installation location of the identified light source (section “Empirical validation”: “We emphasize that LiTell can distinguish which light in the pair the user is currently at as long as CFs for the 2FLs in the pair are different”).
Zhang does not disclose that the spectral information is obtained at a limited number of predetermined wavelengths.
However, Bitra discloses obtaining information from a limited number of predetermined wavelengths (paragraph [0026], information encoded in a specific color, while other colors do not have encoded information therein).
It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain information from a limited number of predetermined wavelengths, as disclosed by Bitra, in the device of Zhang in order to reduce the time required to determine the signature and perform the matching process.
Regarding claims 3 and 12, Zhang in view of Bitra discloses the method of claim 1 and the mobile unit of claim 10, and Bitra further discloses that the spectral information obtained at a limited number of predetermined wavelengths is obtained only at a red, a green and a blue wavelength or at red, green and blue wavelength bands (paragraph [0026]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain information from a limited number of predetermined wavelengths, as disclosed by Bitra, in the device of Zhang in order to help improve the signal-to-noise ratio and make the light sensor more sensitive.
Regarding claims 4 and 13, Zhang in view of Bitra discloses the method of claim 1 and the mobile unit of claim 10, and Zhang further discloses that the relationship between the plurality of the obtained pieces of spectral information is a difference between or a ratio of two spectral powers obtained at two different of the wavelengths (for example, see sections 3-4, noisy outliers removed, and spurious peaks have been removed).
Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Bitra, further in view of U.S. Patent Publication No. 2018/0176739 ("Zhang ‘739").
Regarding claims 2 and 11, Zhang in view of Bitra discloses the method of claim 1 and the mobile unit of claim 10, and Zhang further discloses that the light sensor includes a sensing element that is directed in a vertical upward direction (section 5.2: “user holds the smartphone roughly at level position in parallel to the ceiling fixture.”), the mobile unit being configured to reduce a size of the estimated current location based on sensed light emitted by a light source adjacent to the identified light source (section 7.2.1: “When the location matching confidence is low, LiTell advises the user to sample a single neighboring light to get the correct result.”).
Zhang in view of Bitra does not disclose at least one further sensing element that is oriented in a direction at an acute angle relative to the vertical upward direction, wherein the at least one further sensing element senses light emitted by a light source adjacent to the identified light source.
However, Zhang ‘739 discloses that the light sensor (24, Fig. 1) comprises a sensing element (36a or 36b, Fig. 1) that is directed in a vertical upward direction (Fig. 1, paragraph [0054]) and at least one further sensing element (the other one of 36a or 36b, Fig. 1) that is oriented in a direction at an acute angle relative to the vertical upward direction (Fig. 1, paragraphs [0055], [0062]), wherein the at least one further sensing element (the other one of 36a or 36b, Fig. 1) senses light emitted by a light source adjacent to the identified light source (20a or 20b, Fig. 4, paragraphs [0064], [0067], [0070]), the mobile unit (10, Fig. 1) being configured to reduce a size of the estimated current location based on the sensed light emitted by a light source adjacent to the identified light source (paragraphs [0071], [0075]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to include an additional sensing element as disclosed by Zhang ‘739 in the device of Zhang in view of Bitra in order to resolve ambiguity when an extracted signature is close to multiple light sources.
Claims 5-7 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Bitra, further in view of U.S. Patent Publication No. 2016/0197675 ("Ganick").
Regarding claims 5 and 14, Zhang in view of Bitra discloses the method of claim 1 and the mobile unit of claim 10, but does not disclose that the mobile unit has access to a trained machine learning model configured to output location information within an area illuminated by the identified light source, wherein the estimating the current location includes using the trained machine learning model to generate location information of the location of the mobile unit within the area illuminated by the identified light source.
However, Ganick discloses using a trained machine learning model (paragraphs [0086], [0090]) to generate the position of each light source (paragraph [0086]), which in turn can be used to locate the mobile unit (paragraphs [0086], [0090]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to use a trained machine learning model as disclosed by Ganick in the device of Zhang in view of Bitra in order to allow the location of a target to be determined with a high degree of accuracy.
Regarding claim 6, Zhang discloses a method of detecting a location of a mobile unit, comprising:
using a light sensor (CMOS image sensors used in smartphones, see overall sections 3-4, and “Receiver (smartphone) side implementation” in section 6), obtaining a plurality of pieces of spectral information (sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”; see also section 5: “obtaining an aliased copy of the CF”) of visible light (section 2.1: “A fluorescent light (FL) produces visible light...”) the mobile unit is exposed to (see sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”), the mobile unit being in an environment lit by a plurality of light sources (section 5.1 describes exposing the user/smartphone to multiple lights consecutively, and section 7.2.1 states: “In particular, FLs in the grocery store are closely placed in lines 2 meters away from the phone, resulting in multiple lights being captured simultaneously.”); and
generating a location estimate by inputting, as visible light information (section 2.1: “A fluorescent light (FL) produces visible light...”), the obtained plurality of pieces of spectral information into a model (section 4.3: “run FFT over the vector of samples”), the visible light information including spectral information of visible light the mobile unit is exposed to (see sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”).
Zhang does not disclose that the spectral information is obtained at a limited number of predetermined wavelengths, nor that it is input into a trained machine learning model.
However, Bitra discloses obtaining information from a limited number of predetermined wavelengths (paragraph [0026], information encoded in a specific color, while other colors do not have encoded information therein).
It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain information from a limited number of predetermined wavelengths, as disclosed by Bitra, in the device of Zhang in order to reduce the time required to determine the signature and perform the matching process.
Zhang in view of Bitra does not disclose that the spectral information is input into a trained machine learning model.
However, Ganick discloses generating a location estimate (paragraph [0086]) by inputting light information (paragraph [0086]) into a trained machine learning model (paragraph [0086]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to input light information into a trained machine learning model as disclosed by Ganick in the device of Zhang in view of Bitra in order to calibrate the light-based positioning system with more accuracy.
Regarding claims 7 and 16, Zhang in view of Bitra and Ganick discloses the method of claim 6 and the mobile unit of claim 15, and Ganick further discloses that the machine learning model is further trained to base location estimates on radiofrequency information (Wi-Fi, paragraph [0086]) in addition to the visible light information (light codes, paragraph [0086]) and wherein generating the location estimate further comprises inputting radiofrequency information sensed at a current location of the mobile unit by an RF sensor (additional sensor can include RF-type sensors such as Wi-Fi, paragraph [0086]) of the mobile unit into the machine learning model (paragraph [0086]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to input RF information into the machine learning model as disclosed by Ganick in the device of Zhang in view of Bitra in order to calibrate the light-based positioning system with more accuracy.
Regarding claim 15, Zhang discloses a mobile unit comprising a light sensor and configured to:
obtain a plurality of pieces of spectral information (sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”; see also section 5: “obtaining an aliased copy of the CF”) of visible light (section 2.1: “A fluorescent light (FL) produces visible light...”) the mobile unit is exposed to (see sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”), the mobile unit being in an environment lit by a plurality of light sources (section 5.1 describes exposing the user/smartphone to multiple lights consecutively, and section 7.2.1 states: “In particular, FLs in the grocery store are closely placed in lines 2 meters away from the phone, resulting in multiple lights being captured simultaneously.”);
generate a location estimate by inputting, as visible light information (section 2.1: “A fluorescent light (FL) produces visible light...”), the obtained plurality of pieces of spectral information into a model (section 4.3: “run FFT over the vector of samples”), the visible light information including spectral information of visible light the mobile unit is exposed to (see sections 3-4, “sample the FL’s high-frequency characteristic signals using a camera”).
Zhang does not disclose that the spectral information is obtained at a limited number of predetermined wavelengths, nor that it is input into a trained machine learning model.
However, Bitra discloses obtaining information from a limited number of predetermined wavelengths (paragraph [0026], information encoded in a specific color, while other colors do not have encoded information therein).
It would have been obvious to one of ordinary skill in the art before the effective filing date to obtain information from a limited number of predetermined wavelengths, as disclosed by Bitra, in the device of Zhang in order to reduce the time required to determine the signature and perform the matching process.
Zhang in view of Bitra does not disclose that the spectral information is input into a trained machine learning model.
However, Ganick discloses generating a location estimate (paragraph [0086]) by inputting light information (paragraph [0086]) into a trained machine learning model (paragraph [0086]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to input light information into a trained machine learning model as disclosed by Ganick in the device of Zhang in view of Bitra in order to calibrate the light-based positioning system with more accuracy.
Claims 8-9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. 2025/0301440 ("Lindquist") in view of Zhang ‘739.
Regarding claim 8, Lindquist discloses a method of training a machine learning model comprising:
training a model capable of providing localization information (position, paragraph [0014]) based on radiofrequency measurements (paragraphs [0026], [0029], [0056], [0062]-[0063]), further using visible light information (additional sensor data such as a light fingerprint, paragraph [0029]) detected at a location and known location information of the location at which the visible light information was detected (paragraphs [0029], [0047], [0064]),
the radiofrequency measurements being detected on tracked locations of a mobile unit (paragraphs [0040], [0043], [0056]),
the localization information (position, paragraphs [0014], [0045]) being derived based on a tracked current location of the mobile unit (paragraph [0046]) and from information of a known location of installation of an identified light source (paragraphs [0029], [0032], [0046]-[0047], [0085]).
Lindquist does not explicitly disclose that the light data is visible light information including spectral information of visible light the mobile unit is exposed to.
However, Zhang ‘739 discloses the visible light information including spectral information (Fig. 5) of visible light the mobile unit is exposed to (paragraph [0067]).
It would have been obvious to one of ordinary skill in the art before the effective filing date to use spectral information of visible light that the mobile unit is exposed to, as disclosed by Zhang ‘739, in the method of Lindquist in order to distinguish nearby light sources from each other given their unique signatures resulting from manufacturing variations.
Regarding claim 9, Lindquist in view of Zhang ‘739 discloses the method of claim 8, and Lindquist further discloses training the model to be capable of providing localization information based on radiofrequency measurements (Wi-Fi, paragraphs [0029], [0087]) obtained at measurement locations and on location information of the measurement locations (paragraph [0085]).
Regarding claim 17, Lindquist in view of Zhang ‘739 discloses the machine learning model trained according to the method of claim 8, and Lindquist further discloses a mobile unit (300, Fig. 9, which can be a mobile phone, see paragraph [0135]) comprising the trained machine learning model (paragraph [0316]).
Allowable Subject Matter
Claim 18 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The invention as claimed, specifically in combination with determining the signature by calculating at least one of a ratio of powers provided by the light source at two or more of the predetermined wavelengths, or a difference of the powers provided by the light source at two or more of the predetermined wavelengths, is not taught or made obvious by the prior art of record.
Response to Arguments
Applicant’s arguments with respect to claims 1, 6, 8, 10, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MONICA T. TABA whose telephone number is (571)272-1583. The examiner can normally be reached Monday - Friday 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Georgia Epps can be reached at 571-272-2328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MONICA T TABA/Examiner, Art Unit 2878