Prosecution Insights
Last updated: April 19, 2026
Application No. 18/740,248

AUGMENTED REALITY SYSTEM

Final Rejection — §103, §112
Filed: Jun 11, 2024
Examiner: WANG, YUEHAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Araizen Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 83% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% (404 granted / 485 resolved), +21.3% vs TC avg (above average)
Interview Lift: +12.9% (moderate), measured across resolved cases with an interview
Avg Prosecution: 2y 7m typical timeline
Career History: 532 total applications across all art units, 47 currently pending

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center average estimate shown for comparison. Based on career data from 485 resolved cases.
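As a sanity check on these figures, here is a minimal sketch, assuming (the dashboard does not state this) that each "vs TC avg" delta is a simple percentage-point difference, examiner rate minus Tech Center average:

```python
# Examiner's per-statute rates (percent) and the dashboard's reported deltas
# vs the Tech Center average, copied from the panel above.
examiner_rate = {"101": 4.3, "103": 69.6, "102": 8.3, "112": 6.6}
delta_vs_tc = {"101": -35.7, "103": 29.6, "102": -31.7, "112": -33.4}

# Implied TC average per statute under the assumed convention:
# tc_average = examiner_rate - delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(implied_tc_avg)
```

Under that assumption, every statute maps back to the same 40.0% baseline, which suggests the Tech Center average used by the chart is a single flat estimate rather than a per-statute figure.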

Office Action

§103, §112
DETAILED ACTION

Response to Amendment

Applicant's amendments filed on 13 February 2026 have been entered. Claims 1, 2, 4, and 7 have been amended. Claims 1-7 remain pending in this application, with claim 1 being independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-7 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding Claim 1, the amended claim recites "a visual positioning processor".
The specification of the current application recites "the positioning module 31 may be, for example, an assembly, in a smart phone, that obtains positioning information related to a geographical location by using a positioning system such as Global Positioning System (GPS), Beidou Navigation Satellite System (BDS), or Geographical Information Positioning (GEO)." The examiner could not find support for the visual positioning processor in either the specification or the drawings. The examiner also could not associate the identification object with the possibly implied visual positioning processor. Therefore, giving the claims the broadest reasonable interpretation in light of the specification, the amended claims are rejected under 112(a) as containing new matter. For examination purposes, the visual positioning processor element will be omitted from the following prior art analysis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over ZHANG et al. (US 20150187142 A1), referred to herein as ZHANG, in view of CHEN (US 20250124609 A1), referred to herein as CHEN.

Regarding Claim 1, ZHANG in view of CHEN teaches an augmented reality system, comprising (ZHANG Abst: A method for performing interaction based on augmented reality):

a management platform (ZHANG [0021] The server 604 and the terminals 606 may be implemented on any appropriate computing platform; [0024] performing certain operations on the stored data, e.g., storing abnormal data, user IDs, and corresponding relationship(s) there between, or any other suitable data searching and management operations);

a first user terminal, communicating with the management platform and comprising (ZHANG [0017] The server 604 and the terminals 606 may be coupled through the communication network 602 for information exchange):

a positioning system comprising a GPS receiver (ZHANG [0032] the positioning module 31 may be, for example, an assembly, in a smart phone, that obtains positioning information related to a geographical location by using a positioning system such as Global Positioning System (GPS), Beidou Navigation Satellite System (BDS), or Geographical Information Positioning (GEO); [0048] as shown in FIG. 4, a system for performing interaction based on augmented reality technology includes an image obtaining module 210, a position obtaining module 220, a matching module 230, a displaying module 240 and an interacting module 250; [0051] The position obtaining module 220 is configured to obtain location information of the first terminal and geographical position of a second terminal);

a camera, configured to capture an environmental image from a physical environment (ZHANG [0048] as shown in FIG. 4, a system for performing interaction based on augmented reality technology includes an image obtaining module 210, a position obtaining module 220, a matching module 230, a displaying module 240 and an interacting module 250; [0029] Step S110, obtaining an image of a real scene photographed by a first terminal); and

a display screen, configured to display a virtual placement according to the positioning information and physical address information corresponding to the environmental image, wherein the virtual placement comprises a first virtual object (ZHANG [0048] as shown in FIG. 4, a system for performing interaction based on augmented reality technology includes an image obtaining module 210, a position obtaining module 220, a matching module 230, a displaying module 240 and an interacting module 250; [0060] The displaying module 240 includes a relative position corresponding unit, which is configured to display the information of the second terminals on the image according to the distance between the second terminals and the first terminal, such that relative positions of the information of the second terminals displayed on the image correspond with the second locations of the second terminals in real world… Furthermore, the displaying module 240 further includes a superimposition unit, by which information of different second terminals may be superimposed onto the image of the real scene in layers; [0063] an image is formed by photographing a real scene, and information of the successfully matched second terminal is displayed on the image; then interactions are performed according to needs.
By displaying virtual second terminals on the image of the real scene in accordance with their geographical locations in the real world, interaction with the second terminals is facilitated); and

ZHANG does not teach, but CHEN teaches, a setting terminal, communicating with the management platform and comprising (CHEN [0019] The server 10 is operated by a trade platform (a service provider) to transmit and receive commands and/or data to organize exhibitions):

a user interface, configured to input the physical address information, an identification image of an identification object, and the first virtual object (CHEN FIG. 4: Location of the exhibition site; FIG. 5: Upload 3-D image of the work of art; [0028] The work-of-art image database 183 is operable to store data about works of art. The data about each of the works of art include but not limited to basic identification data and at least one image of the work of art; [0029] The image-integrating database 185 is operable to generate and store the synthetic or VR images of the works of art on the exhibition sites; [0049] The interface 33 connected to the processor unit 31 is used to upload the data about the work of art. The image-taking device 335 of the interface 33 is used to take the real-scene image of the work of art), wherein the management platform sets the virtual placement according to the physical address information, the identification image, and the first virtual object (CHEN [0051] At S04, the server 10 is used to integrate the image of the work of art with the image of the exhibition site. Thus, a synthetic image of the work of art on the exhibition site is generated; [0052] At S05, the server 10 is used to announce exhibition of the work of art on the exhibition site. The exhibiting device 20 or the uploading device 30 runs the exhibition-managing program 110 to transfer a message of the exhibition of the work of art to the server 10).
CHEN discloses a method for effectively exhibiting works of art on exhibition sites, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified ZHANG to incorporate the teachings of CHEN, and to apply the exhibition-managing program to the method and system for performing interaction based on augmented reality. Doing so would make it possible to effectively exhibit works of art on exhibition sites by providing realistic images of works of art on the exhibition sites.

Regarding Claim 2, ZHANG in view of CHEN teaches the augmented reality system according to claim 1, and further teaches wherein the first user terminal further comprises a touch panel configured to input a second virtual object corresponding to the first virtual object (ZHANG [0044] information of different second terminals may be superimposed onto the image of the real scene in layers, which may overlapped according to the different distances, with information of the second terminal closer to the first terminal positioned as upper layer, and information of the second terminal further from the first terminal positioned as lower layer; [0019] A terminal, as used herein, may refer to any appropriate user terminal with certain computing capabilities, e.g., a hand-held computing device (e.g., a tablet), a mobile terminal (e.g., a mobile phone or a smart phone), or any other client-side computing device).

Regarding Claim 3, ZHANG in view of CHEN teaches the augmented reality system according to claim 1, and further teaches wherein the first user terminal receives a proximity notification from the management platform based on a location-based service (CHEN [0053] At S06, at least one of the viewing devices 50 is used to search for the exhibition. The viewing device 50 is used to search for the exhibition.
In an embodiment, the viewing device 50 receives a notification of the exhibition. The exhibition-managing program 110 is executed to set a type or a creator of works of art, an exhibition site or a point of time for example to search for the message of the exhibition in the server 10).

Regarding Claim 4, ZHANG in view of CHEN teaches the augmented reality system according to claim 1, and further teaches wherein the camera scans the identification object to display the virtual placement (CHEN [0036] The sensor module 24 is connected to the processor unit 21. For instance, the sensor module 24 includes an infrared transmitter, a high-frequency wireless communication element (NFC), a blue tooth transceiver, radio tag, a barcode scanner, a barcode or a barcode scanner for example. The sensor module 24 preferably includes multiple sensors arranged in various positions in and/or out of the exhibition site, on or in exhibited works of art. The sensor module 24 allows the provider of the exhibition site to interact with viewers who use the viewing devices 50 to view the exhibited works of art or read data about the exhibited works of art).

Regarding Claim 5, ZHANG in view of CHEN teaches the augmented reality system according to claim 1, and further teaches wherein the first user terminal transmits shared information, related to the first virtual object, to a second user terminal (ZHANG [0046] The user may select the second terminal to interact with according to the displayed information of the second terminals, including avatar, distance, updates and location, etc. Then, the user may have group activities conveniently and quickly by clicking the avatar of the second terminal, concerning about recent updates of the second terminal, leaving a message or having voice conversation, or viewing the frequent contacting second terminal nearby, so as to set up groups.
Upon receiving the command from the user, the first terminal can send text, voice, and video or image message to the corresponding second terminal, so as to realize interaction).

Regarding Claim 6, ZHANG in view of CHEN teaches the augmented reality system according to claim 1, and further teaches wherein the management platform converts the physical address information into positioning information (CHEN [0033] The electronic map generator 19 is connected to the processor module 11. The electronic map generator 19 is operable to generate an electronic map according to the locations of the exhibiting devices 20, the uploading devices 30 and the viewing devices 50. The electronic map can be shown on a home page handled by the exhibition-managing program 110 to help views use the viewing devices 50 to reach the exhibition sites).

Regarding Claim 7, ZHANG in view of CHEN teaches the augmented reality system according to claim 1, and further teaches wherein the user interface is configured to input a display time period, and the display screen displays the virtual placement according to the display time period (CHEN FIG. 6: Period of time; [0030] The exhibition message database 186 is operable to store data about exhibitions of works of art on exhibition sites. The data about the exhibition of a work of art on an exhibition site includes but not limited to the work of art exhibited or to be exhibited, at least one creator of the work of art exhibited or to be exhibited, the period of time of the exhibition, and the location of a related exhibition site).

Response to Arguments

Applicant's arguments filed on 13 February 2026 with respect to the §103 rejection have been fully considered but are not persuasive. On page 4 of Applicant's Remarks, with respect to claim 1, the applicant argues that "(t)he cited references do not disclose the technical mechanism by which the present application uses both physical address information and identification image for dual localization."
The examiner respectfully disagrees with this argument. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., uses both physical address information and identification image for dual localization) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Combined with the 112(a) new matter rejection above, it is respectfully noted that ZHANG in view of CHEN teaches all the limitations as claimed and amended.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang, whose telephone number is (571) 270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Samantha (YUEHAN) WANG/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Jun 11, 2024: Application Filed
Nov 24, 2025: Non-Final Rejection (§103, §112)
Feb 13, 2026: Response Filed
Mar 02, 2026: Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597178: VECTOR OBJECT PATH SEGMENT EDITING
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597506: ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586286: DIFFERENTIABLE REAL-TIME RADIANCE FIELD RENDERING FOR LARGE SCALE VIEW SYNTHESIS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586261: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567182: USING AUGMENTED REALITY TO VISUALIZE OPTIMAL WATER SENSOR PLACEMENT
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 96% (+12.9%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
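The headline projections follow from the career counts by simple arithmetic; a minimal sketch, assuming (as the footnote suggests but does not spell out) that the grant probability is the raw career allow rate and the interview figure just adds the reported +12.9% lift in percentage points:

```python
# Career totals from the Examiner Intelligence panel.
granted, resolved = 404, 485
interview_lift = 0.129  # reported lift for resolved cases with an interview

allow_rate = granted / resolved               # ~0.833, displayed as 83%
with_interview = allow_rate + interview_lift  # ~0.962, displayed as 96%

print(round(allow_rate * 100), round(with_interview * 100))
```

Both rounded values match the displayed 83% and 96%, so the dashboard's figures are internally consistent under this reading.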
