Prosecution Insights
Last updated: April 19, 2026
Application No. 18/818,158

SEMANTIC INFORMATION RETRIEVAL METHOD FOR AUGMENTED REALITY DOMAIN AND DEVICE THEREOF

Status: Non-Final OA (§103)
Filed: Aug 28, 2024
Examiner: CHIN, MICHELLE
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Research & Business Foundation Sungkyunkwan University
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 4m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 85% (540 granted / 634 resolved; +23.2% vs TC avg), above average
Interview Lift: +11.5% for resolved cases with interview (moderate)
Typical Timeline: 2y 4m average prosecution; 29 applications currently pending
Career History: 663 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 70.6% (+30.6% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)

Tech Center averages shown for comparison. Based on career data from 634 resolved cases.

Office Action (Non-Final, §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

2. Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

3. The information disclosure statement (IDS) was submitted on 08/28/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

7. Claims 1, 2 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka et al. (US 2020/0342668 A1) in view of Engel et al. (US 2023/0377283 A1).

8. With reference to claim 1, Chojnacka teaches "A semantic information retrieval method of a computer device comprising at least one processor" (“a system may include at least one computing device, including a memory storing executable instructions, and a processor configured to execute the instructions.” [0007]; “In a system and method, in accordance with implementations described herein, as images are streamed through, for example, a camera of an electronic device, the image frames may be fed through an auto-completion algorithm, or model, to gain a semantic understanding of the physical, real world environment, and in particular, 3D pose and location information of real object(s) in the physical, real world environment.” [0020]).

Chojnacka also teaches "performing semantic information retrieval in an AR (Augmented Reality) domain by the at least one processor" (“The computing device 600 may also include a processor 690 in communication with the user interface system 620, the sensing system 640 and the control system 680, a memory 685, and a communication module 695. The communication module 695 may provide for communication between the electronic device 600 and other, external devices, external data sources, databases, and the like, through a network. A method 700 of generating an augmented reality, or mixed reality environment, and providing for user interaction with virtual objects presented in a camera view, or scene, of a physical environment, in accordance with implementations described herein, is shown in FIG. 7. A user may initiate an augmented reality, or mixed reality, or virtual reality experience, through, for example, an application executing on a computing device to display an augmented reality, or a mixed reality scene including a view of a physical environment (block 710). The augmented/mixed reality scene including the view of the physical environment may be, for example, a camera view of the physical environment captured by an imaging device of the computing device, and displayed on a display device of the computing device. Physical objects detected, for example, through a scan of the physical environment (block 720) may be analyzed for recognition/identification (block 730). Based on the physical objects identified in the physical environment, a semantic understanding of the physical environment may be developed (block 740). Based on the semantic understanding developed in the analysis of the physical objects detected in the physical environment, appropriate contextual objects may be selected for the physical environment, and virtual representations of the suggested objects may be placed in the AR/MR scene of the physical environment (block 750).” [0046-0047])

[Image: media_image1.png (greyscale figure)]

Chojnacka does not explicitly teach "using AR ontology consisting of AR concepts." This is what Engel teaches (“The knowledge base 302 and the ontology 304 are complementary systems for organizing settings utilized by the procedural guidance logic 306 to control renderings by the augmented reality device 308. In the knowledge base 302, settings may be organized with table structure and ‘references’ (to other tables). In the ontology 304, settings may be organized by applying ‘terms’ and ‘relations’. The ontology 304 may be part of a database, or may be accessed independently. The amount of overlap between the two information sub-systems is customizable based on how the overall augmented reality system is designed.” [0046]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Engel into Chojnacka, in order to enable presentation of content that includes interactive procedural content or human-in-the-loop content that provides procedural guidance.

9. With reference to claim 2, Chojnacka does not explicitly teach "the AR ontology comprises hardware, software, tracking, interaction, and interface related to AR." This is what Engel teaches (“The knowledge base 302 and the ontology 304 are complementary systems for organizing settings utilized by the procedural guidance logic 306 to control renderings by the augmented reality device 308. In the knowledge base 302, settings may be organized with table structure and ‘references’ (to other tables). In the ontology 304, settings may be organized by applying ‘terms’ and ‘relations’. The ontology 304 may be part of a database, or may be accessed independently. The amount of overlap between the two information sub-systems is customizable based on how the overall augmented reality system is designed.” [0046]; “The system also utilizes the ontology 304 to enable operation of a human-in-the-loop AR interactive procedural guidance system. In one aspect, the ontology 304 enables operation of the procedural guidance logic 306 by providing a query-friendly structure for relevant knowledge including knowledge in the knowledge base 302.” [0056]; “The augmented reality headset logic 1000 comprises a graphics engine 1002, a camera 1004, processing units 1006, including one or more CPU 1008 (central processing unit) and/or GPU 1010 (graphics processing unit), a WiFi 1012 wireless interface, a Bluetooth 1014 wireless interface, speakers 1016, microphones 1018, and one or more memory 1020.” [0099]; “FIG. 11 depicts an embodiment of additional components of augmented reality headset logic 1100 including a rendering engine 1102, local augmentation logic 1104, local modeling logic 1106, a rendering engine 1102, device tracking logic 1108, an encoder 1110, and a decoder 1112. Each of these functional components may be implemented in software, dedicated hardware, firmware, or a combination of these logic types.” [0103]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Engel into Chojnacka, in order to enable presentation of content that includes interactive procedural content or human-in-the-loop content that provides procedural guidance.

10. With reference to claim 11, Chojnacka teaches "A non-transitory computer-readable recording medium storing program instructions to execute the semantic information retrieval method of claim 1 in the computer device" (“In a system and method, in accordance with implementations described herein, as images are streamed through, for example, a camera of an electronic device, the image frames may be fed through an auto-completion algorithm, or model, to gain a semantic understanding of the physical, real world environment, and in particular, 3D pose and location information of real object(s) in the physical, real world environment.” [0020]; “The storage device 2006 is capable of providing mass storage for the computing device 2000. In one implementation, the storage device 2006 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002.” [0052]).

11. Claim 12 is similar in scope to claim 1, and thus is rejected under similar rationale. Chojnacka additionally teaches "A computer device, comprising at least one processor implemented to execute instructions readable in a computer device" (“a system may include at least one computing device, including a memory storing executable instructions, and a processor configured to execute the instructions.” [0007]; “The storage device 2006 is capable of providing mass storage for the computing device 2000. In one implementation, the storage device 2006 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 2004, the storage device 2006, or memory on processor 2002.” [0052]).

12. Claim 13 is similar in scope to claim 2, and thus is rejected under similar rationale.

13. Claims 3, 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Chojnacka et al. (US 2020/0342668 A1) and Engel et al. (US 2023/0377283 A1), as applied to claims 1 and 12 above, and further in view of Geller et al. (US 2013/0013580 A1).

14. With reference to claim 3, Chojnacka does not explicitly teach "the performing comprises categorizing web crawler results corresponding to search queries into AR fields through the AR ontology." Engel teaches the performing comprises into AR fields through the AR ontology (“The knowledge base 302 and the ontology 304 are complementary systems for organizing settings utilized by the procedural guidance logic 306 to control renderings by the augmented reality device 308. In the knowledge base 302, settings may be organized with table structure and ‘references’ (to other tables). In the ontology 304, settings may be organized by applying ‘terms’ and ‘relations’. The ontology 304 may be part of a database, or may be accessed independently. The amount of overlap between the two information sub-systems is customizable based on how the overall augmented reality system is designed.” [0046]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Engel into Chojnacka, in order to enable presentation of content that includes interactive procedural content or human-in-the-loop content that provides procedural guidance.

The combination of Chojnacka and Engel does not explicitly teach categorizing web crawler results corresponding to search queries. This is what Geller teaches (“in one embodiment a method of building an ontology includes querying a search engine API with common terms, extracting from the results generated by the search engine terms of interest, assigning the terms of interest to a top value category, querying a separate database with the term(s), saving type and relationship data for term(s) found in the separate database, removing all terms not correlated to the selected term type, creating mappings for disambiguation tags, assigning terms to an ontology type, analyzing the types of relationships for each type, and retaining the most common relationships for each type.” [0090]; “the manner in which search suggestions are generated may be based on the lack of certain relationship information from an instance, rather than the presence of such relationship. If a relationship is within a class's domain, but a given instance does not have any target for it, the system may provide a search suggestion in the form of [Instance Name] [Relationship Name]. For example, the system displays the suggestion "Kurt Cobain song," even though there is no song information for Kurt Cobain stored in the ontology (as there is no song information in DBpedia for Kurt Cobain).” [0093]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Geller into the combination of Chojnacka and Engel, in order to improve the search results.

15. With reference to claim 4, Chojnacka does not explicitly teach "the performing comprises clustering AR documents into AR related topics or concepts by using the AR ontology." Engel teaches the performing comprises AR and by using the AR ontology (“The knowledge base 302 and the ontology 304 are complementary systems for organizing settings utilized by the procedural guidance logic 306 to control renderings by the augmented reality device 308. In the knowledge base 302, settings may be organized with table structure and ‘references’ (to other tables). In the ontology 304, settings may be organized by applying ‘terms’ and ‘relations’. The ontology 304 may be part of a database, or may be accessed independently. The amount of overlap between the two information sub-systems is customizable based on how the overall augmented reality system is designed.” [0046]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Engel into Chojnacka, in order to enable presentation of content that includes interactive procedural content or human-in-the-loop content that provides procedural guidance.

The combination of Chojnacka and Engel does not explicitly teach clustering documents into related topics or concepts. This is what Geller teaches (“in one embodiment a method of building an ontology includes querying a search engine API with common terms, extracting from the results generated by the search engine terms of interest, assigning the terms of interest to a top value category, querying a separate database with the term(s), saving type and relationship data for term(s) found in the separate database, removing all terms not correlated to the selected term type, creating mappings for disambiguation tags, assigning terms to an ontology type, analyzing the types of relationships for each type, and retaining the most common relationships for each type.” [0090]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Geller into the combination of Chojnacka and Engel, in order to improve the search results.

16. Claim 14 is similar in scope to claim 4, and thus is rejected under similar rationale.

Allowable Subject Matter

17. Claims 5-10 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is an examiner’s statement of reasons for allowance: Regarding claims 5 and 15, the prior art of record fails to either individually or in combination teach the claimed feature of “representing AR documents as vectors according to concepts of the AR ontology; and clustering AR document vectors into the AR domain.” Claims 6-10 are also objected to for depending from claim 5.

Conclusion

18. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michelle Chin, whose telephone number is (571) 270-3697. The examiner can normally be reached Monday-Friday, 8:00 AM-4:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE CHIN/
Primary Examiner, Art Unit 2614
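The feature the examiner deemed allowable (claims 5 and 15) is representing AR documents as vectors according to concepts of an AR ontology and then clustering those vectors. A minimal sketch of that general idea follows, using a toy ontology and a greedy cosine-similarity grouping; the concept names, terms, and threshold are hypothetical illustrations, not the application's actual ontology or algorithm:

```python
from math import sqrt

# Hypothetical AR ontology: the five concept areas named in claim 2,
# each mapped to a few illustrative terms (my assumption, for the sketch).
AR_ONTOLOGY = {
    "hardware": {"headset", "display", "sensor", "camera"},
    "software": {"engine", "renderer", "api"},
    "tracking": {"slam", "pose", "marker", "imu"},
    "interaction": {"gesture", "controller", "gaze"},
    "interface": {"overlay", "hud", "menu"},
}
CONCEPTS = sorted(AR_ONTOLOGY)  # fixed concept order for the vector axes

def to_vector(doc: str) -> list[float]:
    """Represent a document as per-concept counts of ontology term hits."""
    words = doc.lower().split()
    return [sum(w in AR_ONTOLOGY[c] for w in words) for c in CONCEPTS]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two concept vectors (0.0 if either is empty)."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs: list[str], threshold: float = 0.5) -> list[list[int]]:
    """Greedy single-pass clustering: join the first cluster whose seed
    vector is similar enough, otherwise start a new cluster."""
    clusters: list[tuple[list[float], list[int]]] = []
    for i, vec in enumerate(to_vector(d) for d in docs):
        for seed, members in clusters:
            if cosine(seed, vec) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]
```

Running `cluster` on two tracking-heavy snippets and one hardware-heavy snippet groups the tracking documents together and separates the hardware one. A production system would substitute the application's real ontology and a principled algorithm (e.g., k-means over TF-IDF-weighted concept vectors); the greedy pass here only illustrates the vector-then-cluster pipeline the claims recite.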

Prosecution Timeline

Aug 28, 2024
Application Filed
Mar 14, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602870
COMPUTER-AIDED TECHNIQUES FOR DESIGNING 3D SURFACES BASED ON GRADIENT SPECIFICATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597205
HYBRID GPU-CPU APPROACH FOR MESH GENERATION AND ADAPTIVE MESH REFINEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12592041
MIXED SHEET EXTENSION
2y 5m to grant Granted Mar 31, 2026
Patent 12586287
Method of Operating Shared GPU Resource and a Shared GPU Device
2y 5m to grant Granted Mar 24, 2026
Patent 12579700
METHODS OF IMPERSONATION IN STREAMING MEDIA
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview (+11.5%): 97%
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 634 resolved cases by this examiner. Grant probability derived from career allow rate.
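The headline projections follow from simple arithmetic on the examiner's career record (540 granted of 634 resolved, with a reported +11.5 percentage-point interview lift). A short sketch reproducing them; the rounding behavior is my assumption:

```python
# Figures from the examiner's career history above.
granted, resolved = 540, 634

allow_rate = granted / resolved                  # career allow rate, ~0.852
grant_probability = round(allow_rate * 100)      # headline "Grant Probability"

interview_lift_pp = 11.5                         # reported lift, in percentage points
with_interview = round(allow_rate * 100 + interview_lift_pp)

print(grant_probability, with_interview)         # prints: 85 97
```

This confirms the lift is additive in percentage points (85.2 + 11.5 ≈ 97), not a multiplicative 11.5% increase, which would give only about 95.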
