DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 1/2/2026 with respect to claims 1-20 have been fully considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 4-7, 11-17 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Coleman et al. (PGPUB Document No. US 2016/0314623) in view of Palomo et al. (PGPUB Document No. US 2020/0234045).
Regarding claim 1, Coleman teaches a system, comprising:
At least one processor (the AR device of (Coleman: 0006) requires a processor);
And at least one memory including program code which when executed by the at least one processor causes operations comprising (the AR device of (Coleman: 0006) requires some form of memory storing instructions enabling the AR software of Coleman);
Detecting a physical object in a video stream provided by a camera of an extended reality device providing an extended reality environment (“The augmented reality overlay device 26 has used its cameras to identify the configuration of the tool 90 and the portion of the tool 90 being viewed by the wearer” (Coleman: 0088, FIG.17) & “[t]he band saw may be configured for vertical operation, which would be recognized by the device 26” (Coleman: 0089));
In response to the detecting, extracting context information from at least a portion of the video stream associated with the physical object (recognizing the tools above based on a “video camera with shape recognition capability for sensing objects in the view of the wearer” (Coleman: 0062));
Querying, using the extracted context information, a system including a database to obtain at least one task and/or at least one document object that are associated with the extracted context information (“the augmented reality overlay device 26 indicates that information on operating the band saw 90 in the viewed position may be found on page 8 of the manual 92, as indicated at 94” (Coleman: 0088, FIG.17) & “the band saw may be configured for vertical operation, which would be recognized by the device 26 and the corresponding pages of the operations manual would be displayed” (Coleman: 0089));
In response to the querying, receiving the at least one task and/or the at least one document object that are associated with the extracted context information (the corresponding portion of the manual that is found by the system of Coleman (Coleman: 0088, FIG.17));
And in response to receiving, providing to the extended reality device the at least one task and/or the at least one document object to cause the extended reality device to augment, based on the extracted context information from the physical object, the extended reality environment (the resulting AR view shown in FIG.17 comprising the tool (object) and the corresponding portion of the manual 92 (Coleman: 0088, FIG.17)).
However, Coleman does not expressly teach, but Palomo teaches, the detected physical object being a document, and the extracted context information being textual information (Palomo teaches the concept of an AR system that recognizes text and displays a corresponding AR overlay (Palomo: 0021, 0027)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of Coleman to enable recognizing text for displaying AR overlays as taught by Palomo, because this enables an added variety of AR content.
Regarding claim 2, the combined teachings teach the system of claim 1, wherein the operations further comprise converting the extracted textual information into a machine-readable format (parsing text using optical character recognition (Palomo: 0017)); and using the converted textual information to query the system to obtain the at least one task and/or the at least one document object (retrieving potential imagery from various sources (Palomo: 0018)).
Regarding claim 4, the combined teachings teach the system of claim 1, wherein the extended reality environment is augmented by presenting on a display of the extended reality device the detected physical document (text such as a menu or assembly instructions (Palomo: 0021, 0027)) and at least one digital overlay presenting the at least one task and/or the at least one document object (displaying a menu item corresponding to the menu (Palomo: 0021)).
Regarding claim 5, the combined teachings teach the system of claim 4, wherein the extended reality environment presents via a display comprised in the extended reality device a plurality of physical objects including the detected physical document (identification of text (Palomo: 0021, 0027)) and the at least one digital overlay presenting the at least one task and/or the at least one document object (displaying a menu item corresponding to the menu (Palomo: 0021)).
Regarding claim 6, the combined teachings teach the system of claim 5, wherein the at least one document object comprises at least one electronic document stored in the database (manual 92 is available for viewing via the augmented reality overlay device 26 (Coleman: 0088, FIG.17)), and wherein the at least one task is part of a workflow associated with the at least one document object.
Regarding claim 7, the combined teachings teach the system of claim 1, wherein the extended reality device comprises at least one of a head-mounted display, a headset, a haptic controller, a smart phone, a computer including a display, and augmented reality glasses (“An example of a wearable display is the Google Glass head-mounted display. Other examples of a display include the Leap Motion display device, the Microsoft Holo Lens holographic display, or the Epson projector glasses” (Coleman: 0017)).
Claim(s) 11-17 are corresponding method claim(s) of claim(s) 1-7. The limitations of claim(s) 11-17 are substantially similar to the limitations of claim(s) 1-7. Therefore, they have been analyzed and rejected for substantially the same reasons as claim(s) 1-7.
Claim(s) 20 is a corresponding computer-readable medium claim of claim 1. The limitations of claim 20 are substantially similar to the limitations of claim 1. Therefore, it has been analyzed and rejected for substantially the same reasons as claim 1. Note, the Examiner submits that the wearable display 16 (Coleman: 0051, FIG.1) requires a computer-readable medium to carry out the functions disclosed by Coleman.
Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Coleman in view of Palomo as applied to the claim(s) above, and further in view of Ivers et al. (PGPUB Document No. US 2017/0186230).
Regarding claim 3, the combined teachings above teach the system of claim 1, wherein the extracting of the textual information from at least a portion of the video stream associated with the physical document comprises:
detecting a finger of a user of the extended reality device pointing at a location on the physical document; and extracting the textual information in the location on the physical document at which the finger of the user is pointing (translating text corresponding to where the finger is pointing (Ivers: 0025)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to allow user input in the manner taught by Ivers, because this enables an intuitive method of interacting within the AR environment.
Claim(s) 8, 10 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Coleman in view of Palomo as applied to the claim(s) above, and further in view of Haapoja et al. (PGPUB Document No. US 2022/0092857).
Regarding claim 8, the combined teachings above teach the system of claim 1, wherein the extracted context information comprises model or type information (“recognizing the tool 52 as a table saw and particularly this model or type of table saw” (Coleman: 0069)).
However, Coleman does not expressly teach but Haapoja teaches the model or type information comprising at least one of a file number, a reference number, a process reference number, an invoice number, a purchase order number, an order number, a shipping tracking number, and a line item number (An AR system recognizing QR codes on the real-world item 850 to help identify the product number and/or the merchant (Haapoja: 0166)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to recognize the tools utilizing the QR code teaching of Haapoja, because this enables high-precision and stable recognition of real-world objects.
Regarding claim 10, Coleman does not expressly teach, but Haapoja teaches, the system of claim 1, wherein the detecting of the physical object uses a machine readable code on the physical object to detect the physical object (QR code (Haapoja: 0166)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of Coleman to recognize the tools utilizing the QR code teaching of Haapoja, because this enables high-precision and stable recognition of real-world objects.
Claim 18 is similar in scope to claim 8.
Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Coleman in view of Palomo as applied to the claim(s) above, and further in view of Wiggeshoff (PGPUB Document No. US 2023/0098160).
Regarding claim 9, Coleman does not expressly teach, but Wiggeshoff teaches, the system of claim 1, wherein the detecting of the physical object uses a machine learning model to detect the physical document (“applying a machine learning model to recognize the type of the real world object” (Wiggeshoff: 0012)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of Coleman to recognize the tools utilizing the machine learning teaching of Wiggeshoff, because this enables enhanced accuracy and efficiency.
Claim 19 is similar in scope to claim 9.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu whose telephone number is (571)272-8079. The examiner can normally be reached M-F: 9:30 - 1:30pm, 3:30-8:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H CHU/Primary Examiner, Art Unit 2616