DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 8, 10, 11, 13, and 14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Firintepe (US 20220113546 A1).
Regarding claim 8, Firintepe teaches A method for determining a pose of at least one pair of smart glasses in an interior of a mobile machine, comprising:
capturing a camera image of an interior of the mobile machine ([0038]-[0039], where the system has a capturing unit used for determining the orientation of AR glasses within a passenger compartment);
determining a relevant glasses type for at least one pair of smart glasses imaged in the camera image by evaluating a data-based glasses-type recognition model on the basis of the captured camera image, wherein the data-based glasses-type recognition model is trained to assign a camera image to a relevant glasses type of one or more pairs of smart glasses (Fig. 2, [0041]-[0042], where the wearable device is recognized based on existing training data and a deep learning model);
selecting, in each case, one of a plurality of data-based pose recognition models depending on the recognized glasses type, wherein the data-based pose recognition models are each trained for one glasses type, in order to determine a glasses pose of the corresponding pair of smart glasses according to the camera image (Fig. 2, [0041]-[0042], where the deep learning model is further used to determine the pose of the smart glasses);
determining, in each case, the glasses pose of the pair of smart glasses of the corresponding glasses type by means of the selected pose recognition model (Fig. 3, [0045]-[0046], where the pose of the smart glasses is determined).
Regarding claim 10, Firintepe teaches the method of claim 8, wherein the pose of the at least one pair of smart glasses is transmitted to the relevant smart glasses so that a contact analogue display can be displayed in the relevant smart glasses depending on the pose of the glasses ([0019]-[0020], [0046]-[0047], where the pose is transmitted to the smart glasses).
Regarding claim 11, Firintepe teaches the method of claim 8, wherein model parameters of the selected pose recognition model are retrieved from a database and implemented in order to provide the corresponding pose recognition model (Fig. 2, [0041]-[0042], where the wearable device is recognized based on existing training data built using a deep learning model, which training data constitutes a database).
Regarding claim 13, Firintepe teaches a device for determining the pose of at least one pair of smart glasses in an interior of a mobile machine, comprising:
an interior camera, which is designed to capture a camera image of an interior of the mobile machine ([0038]-[0039], where the system has a capturing unit used for determining the orientation of AR glasses within a passenger compartment);
a processor unit configured to:
determine a relevant glasses type for at least one pair of smart glasses imaged in the camera image via evaluating a data-based glasses-type recognition model on the basis of the captured camera image, wherein the data-based glasses-type recognition model is trained to assign a camera image to a relevant glasses type of one or more pairs of smart glasses (Fig. 2, [0041]-[0042], where the wearable device is recognized based on existing training data and a deep learning model),
select, in each case, one of a plurality of data-based pose recognition models depending on the recognized glasses type, wherein the data-based pose recognition models are each trained for a specific glasses type, in order to determine a glasses pose of the corresponding pair of smart glasses according to the camera image (Fig. 2, [0041]-[0042], where the deep learning model is further used to determine the pose of the smart glasses), and
determine, in each case, the glasses pose of the pair of smart glasses of the corresponding glasses type by means of the selected pose recognition model (Fig. 3, [0045]-[0046], where the pose of the smart glasses is determined).
Regarding claim 14, Firintepe teaches the device of claim 13, wherein a communications unit is configured to transmit the glasses pose to the smart glasses ([0019]-[0020], [0046]-[0047], where the pose is transmitted to the smart glasses).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Firintepe (US 20220113546 A1) in view of Balaji et al. (US 20210321894 A1, hereafter Balaji).
Regarding claim 9, Firintepe teaches the method of claim 8, as discussed above. However, Firintepe does not explicitly teach the method wherein a bounding box of the recognized smart glasses is furthermore determined by means of the glasses-type recognition model, wherein the relevant pose recognition model is applied to a detail of the camera image that is determined by the bounding box. This feature was well known in the art, as evidenced by Balaji (Figs. 3A-4C, [0055]-[0058], where a bounding box is used for extraction of information about a user’s pose). Firintepe teaches using a deep learning model to extract pose information but does not explicitly teach the use of a bounding box. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Balaji into Firintepe’s method, and such an incorporation would have yielded a predictable result.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Firintepe (US 20220113546 A1) in view of Rabinovich et al. (US 20180053056 A1, hereafter Rabinovich).
Regarding claim 12, Firintepe teaches the method of claim 8, as discussed above. However, Firintepe does not further teach the method wherein the model parameters of the pose recognition model are retrieved from a cloud database depending on a recognized glasses type. This feature was well known in the art, as evidenced by Rabinovich ([0083], where the AR system has access to cloud resources for a mapping database of models). Both Firintepe and Rabinovich teach the use of databases of existing models; Firintepe is merely silent with respect to the use of a remote or cloud database for those models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to enable the device of Firintepe to retrieve model data from a remote resource as taught by Rabinovich in order to expand and update the library of objects to be modeled.
Response to Arguments
Applicant's arguments filed 2/24/2026 have been fully considered but they are not persuasive.
On page 5 of Applicant’s arguments/remarks, Applicant asserts that “Firintepe does not describe determining a relevant glasses type or doing so by evaluating a data-based glasses-type that is trained to assign a camera image to relevant glasses-type,” alleging that Firintepe is “glasses type agnostic” and Firintepe does not “say anything about a glasses-type recognition model that is trained to assign a camera image to a relevant glasses type.” However, the cited portions of Firintepe explicitly state that “[t]raining with different faces and smart glasses may in this case make it possible to track different persons and smart glasses” and that it becomes “possible to recognize and locate various persons with different data glasses as soon as they become visible in the camera image.” Firintepe cannot be said to be “glasses type agnostic” where it explicitly teaches recognition of different smart glasses. Applicant’s remaining arguments on pages 5 and 6 further repeat that Firintepe “is silent about a plurality of models” despite the cited portion teaching that Firintepe collects image data to discriminate between and track different smart glasses.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER D MCLOONE whose telephone number is (571)272-4631. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PETER D MCLOONE/Primary Examiner, Art Unit 2621