Prosecution Insights
Last updated: April 19, 2026
Application No. 18/455,507

AUGMENTED REALITY METHOD AND RELATED DEVICE

Final Rejection — §102, §103
Filed: Aug 24, 2023
Examiner: CRADDOCK, ROBERT J
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Huawei Technologies Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% — above average (519 granted / 616 resolved; +22.3% vs TC avg)
Interview Lift: +14.4% (moderate) for resolved cases with an interview
Typical Timeline: 2y 4m average prosecution; 27 applications currently pending
Career History: 643 total applications across all art units
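
The figures above are simple ratios over this examiner's resolved cases. As a rough illustration of the arithmetic, a minimal sketch follows; the Tech Center average used here is an assumed value back-solved from the "+22.3% vs TC avg" figure, and is not published data from this page.

    # Rough illustration of the examiner-intelligence arithmetic shown above.
    # The counts are from this page; tc_avg_allow_rate is an assumption.
    granted, resolved = 519, 616
    career_allow_rate = granted / resolved                 # ~0.843 -> shown as "84% Career Allow Rate"

    tc_avg_allow_rate = 0.62                               # assumed, back-solved from the +22.3% delta
    delta_vs_tc = career_allow_rate - tc_avg_allow_rate    # ~+0.223 -> "+22.3% vs TC avg"

    # "Interview Lift" is the allow-rate difference between resolved cases
    # with and without an examiner interview (reported as +14.4% on this page).
    interview_lift = 0.144

    print(f"allow rate {career_allow_rate:.1%}, {delta_vs_tc:+.1%} vs TC avg, interview lift {interview_lift:+.1%}")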

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 24.3% (-15.7% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 616 resolved cases.

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The applicant's arguments on page 8 regarding the previous claim objections are persuasive, as the amendments address the objections.

Regarding the applicant's arguments on pages 6-7: "Claim 1, as amended, recites 'in response to a first operation of a user on an application of a terminal device, displaying a target image' and 'in response to a second operation of the user on the application, presenting a virtual object in the target image, wherein the second operation is an operation of adding special effect on the application'. Therefore, after displaying the target image in response to a first operation of the user on an application of a terminal device, claim 1 then presents a virtual object in the target image in response to the second operation of the user on the application to add special effect on the application. For example, paragraph [0198] of the specification states that the user operates the terminal device again, so that the terminal device renders a virtual wing near the human presented by the target image. Thus, when the user inputs an operation of adding special effect (for example, a specific virtual fin) to the application on the terminal device, the terminal device may render the virtual wing on a back of the human in the target image. Holzer, in paragraph [0047] and Fig. 1, discloses at most only one operation 102 of a user for selecting real objects in an image. Holzer does not disclose a second operation of the user for displaying virtual objects. In fact, in Figure 1 of Holzer, element 100 is a multi-view interactive digital media representation (MVIDMR) acquisition system, element 116 is an enhancement algorithm, and element 118 is an MVIDMR. (See, e.g., Holzer, paras. [0043], [0048] and [0059].) None of the Holzer MVIDMR acquisition system 100, enhancement algorithm 116 and MVIDMR 118 is an operation on an application of a terminal device, let alone 'a second operation' that is 'an operation of adding special effect on the application' as specified by claim 1. Therefore, Holzer fails to disclose a second operation of the user for displaying virtual objects, and presenting a virtual object in response to the second operation of the user to add special effect on the application. In addition, the Office Action recognizes that Holzer does not disclose the virtual object is overlaid on the first object. The Office Action instead points to Figs. 5A-5C of Barron as disclosing such. (Office Action, p. 4.) As discussed above, the distinguishing technical features between amended claim 1 and Holzer are summarized as follows: in response to a second operation of the user on the application, presenting a virtual object in the target image, wherein the virtual object is overlaid on the first object and the second operation is an operation of adding special effect on the application. The distinguishing technical features give rise to the technical effect that after displaying a target image in response to a first operation of a user on an application of a terminal device, the terminal device then renders a virtual object on the first object (for example, a back of the human) in the target image in an overlay manner in response to a second operation of adding special effect (for example, a specific virtual fin) to the application. Thus, the objective technical problem to be solved is how to render a virtual object based on the user's needs on the application."

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "For example, paragraph [0198] of the specification states that the user operates the terminal device again, so that the terminal device renders a virtual wing near the human presented") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "[...] a second operation of the user for displaying virtual objects, and presenting a virtual object in response to the second operation of the user to add special effect on the application") are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

On page 10 the applicant argues: "Accordingly, in claim 5, 'a first object', 'a second object' and 'a third object' are independent of each other; they are different objects. For example, the second object is a reference object of the first object. And obtaining a pose variation based on the location information of the first object relative to the second object, obtaining fourth location information of the third object based on the pose variation, and rendering the third object in the target image based on the fourth location information. In Holzer, head 1006a, shoulder 1009 and knee 1015, as illustrated in Fig. 10 of Holzer, belong to a same object, which is a human. Holzer renders the head 1006a, the shoulder 1009, and the knee 1015 simultaneously when there is a change in sitting/standing/moving. There is no calculation of the position of the knee 1015 based on the head 1006a and the shoulder 1009, and updating the position of the knee 1015 based on the calculation of the position." The examiner respectfully disagrees. The examiner notes that whether an object is independent or not independent does not distinguish the claims over the prior art. Under this rationale the applicant's arguments are not persuasive. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "[...] are independent of each other, they are different objects.") are not recited in the rejected claim(s).
Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Under this rationale the applicant's arguments are not persuasive.

Allowable Subject Matter

Claims 8-12 and 14-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Holzer et al. (US 20190116322 A1), as cited in an IDS, in view of Barron et al. (US 11423652 B2).

Regarding claim 1, Holzer teaches a method of augmented reality (See abstract: a method, and is describing augmented reality), comprising: in response to a first operation of a user on an application of a terminal device (See Fig. 1, element 100 or 102, and ¶47: "For instance, if a dominant object is detected in a series of images, this object can be selected as the content. In other examples, a user specified target 102 can be chosen, as shown in FIG. 1." See also the annotated Fig. 1 below; the first operation is considered to be 102 or 100, that is, 102 may be the starting of an operation or 100 may be starting an application. The MVIDMR Acquisition System 100 may be considered to be an application. Figures 8A and 8B, ¶94-95: a mobile device is interpreted as a terminal device.), displaying a target image, wherein a first object is presented in the target image (See ¶61, Fig. 15, element 1520: display); and the first object is an object in a real environment (See Fig. 10: the first object can be any part of a person or a person in general, and the person in Figure 10 is in a real environment; ¶149); in response to a second operation of the user on the application (See Fig. 1: the second operation of the user can be 116 or 118; see above for the application), presenting a virtual object in the target image (¶208-¶220, Fig. 14A: Fig. 14A shows a virtual object in the target image; the wings are considered to be a virtual object) [...] and the second operation is an operation of adding special effect on the application (See Fig. 1, element 100 or 102, the first operation that is starting an application; element 116 or 118, the second operation, are considered to add special effect); and in response to movement of the first object, updating a pose of the first object in the target image, and updating a pose of the virtual object in the target image, to obtain a new target image, wherein the pose of the first object and the pose of the virtual object each comprises a location and an orientation, and the orientation of the virtual object is associated with the orientation of the first object (See ¶208-¶220, Fig. 14A. As per ¶218, "The wing effect 1408c is complete in size. The orientation of the wing effect is slightly changed between the images 1402c and 1402d as the orientation of the person has changed. As is shown in images 1404e, 1404f and 1404g, the orientation of the wings changes as the orientation in the person changes in the images." Said another way, a new target image may be any of the updated images when the orientation of the wings changes between 1402c and 1402d as the orientation of the person changes, or as also shown in 1404e, 1404f or 1404g.), but does not explicitly disclose wherein the virtual object is overlaid on the first object.

Barron teaches wherein the virtual object is overlaid on the first object (See Figs. 5A-5C: the first object is the eyelid; the virtual object is considered to be the AR effect, in this instance makeup; the AR effect in the aforementioned figures is element 219A-C). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Holzer in view of Barron, combining the prior art elements according to known methods to yield predictable results. Furthermore, the virtual try-on of Barron allows a user to explore various products, adjust sizes and positions, and find what best suits their personal style and preferences.

[Image: Holzer Figure 1, with examiner annotations]

Regarding claim 2, Holzer in view of Barron teaches the method according to claim 1, wherein in the new target image, the orientation of the virtual object is the same as the orientation of the first object, and a relative location between the virtual object and the first object remains unchanged after the pose of the first object and the pose of the virtual object are updated (See ¶208-¶220, Fig. 14A. As per ¶218, "The wing effect 1408c is complete in size. The orientation of the wing effect is slightly changed between the images 1402c and 1402d as the orientation of the person has changed. As is shown in images 1404e, 1404f and 1404g, the orientation of the wings changes as the orientation in the person changes in the images." Said another way, the wing's relative location remains unchanged as the wing's orientation changes with the person.).

Regarding claim 3, Holzer in view of Barron teaches the method according to claim 1, wherein the first object is a human, and the virtual object is a virtual wing (See ¶208-¶220, Fig. 14A: the wings are the virtual object; the first object is the person).

Regarding claim 4, Holzer in view of Barron teaches the method according to claim 1, wherein the first operation is an operation of starting an application (See Fig. 1, element 100 or 102, the first operation that is starting an application; element 116 or 118, the second operation, are considered to add special effect).
Regarding claim 19, Holzer in view of Barron teaches an augmented reality apparatus, comprising: a processor; and a memory coupled to the processor to store instructions, which when executed by the processor, cause the augmented reality apparatus to perform the method according to claim 1 (See ¶264-266 and ¶287. ¶266 describes a processor, which is considered to be coupled to a memory. ¶264 describes the MVIDMR, which carries out the broad overall invention of Holzer, including the augmented reality aspect.).

Regarding claim 20, Holzer in view of Barron teaches a non-transitory computer readable medium having a computer program stored therein, which when executed by a computer, causes the computer to perform the method according to claim 1 (See ¶264-266 and the rejection of claim 19 above. ¶284: program instructions for carrying out the invention of Holzer.).

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 5, 6, 7 and 13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Holzer et al. (US 20190116322 A1), as cited in an IDS.

Regarding claim 5, Holzer teaches a method of augmented reality (See abstract: a method, and is describing augmented reality), comprising: obtaining a target image (See Fig. 1, element 100 or 102, ¶47: "For instance, if a dominant object is detected in a series of images, this object can be selected as the content. In other examples, a user specified target 102 can be chosen, as shown in FIG. 1.") and first location information of a first object in the target image (See Fig. 10, ¶164-¶174: the first location can be location 1006a, the head, in Fig. 10); obtaining second location information of a second object (See Fig. 10, ¶164-¶174: the second location can be location 1009, the shoulder, in Fig. 10) in a three-dimensional coordinate system (¶165: joints are expressed in 3D coordinates) and third location information of a third object (See Fig. 10, ¶164-¶174: the third location can be location 1015, the knee, in Fig. 10) in the three-dimensional coordinate system (¶165: joints are expressed in 3D coordinates), wherein the second object is a reference object of the first object (See Fig. 10: the objects are connected, so the second object can be a reference object), and the second location information and the third location information are preset information (See Fig. 10, ¶164-¶174: the second location can be location 1009, the shoulder, and the first location can be location 1006a, the head, in Fig. 10. The preset information is considered to be the initial position/pose prior to any moving.); obtaining a pose variation of the first object relative to the second object based on the first location information and the second location information (See Fig. 10, ¶164-¶174: the person may be sitting or standing or moving, and in doing so there will be a pose variation based on the locations); transforming the third location information based on the pose variation, to obtain fourth location information of the third object in the three-dimensional coordinate system (See Fig. 10, ¶164-¶174: the third location information may be updated based on the change of sitting/standing/moving to obtain a fourth position); and rendering the third object in the target image based on the fourth location information, to obtain a new target image (See Fig. 10, ¶164-¶174: the third location information may be updated based on the change of sitting/standing/moving to obtain a fourth position; therefore, when rendering the third object at the fourth location, the update obtains a new target image.).

Regarding claim 6, Holzer teaches the method according to claim 5, wherein obtaining the pose variation of the first object relative to the second object comprises: obtaining depth information of the first object (See claim 5 above, with explanation. See ¶165 and ¶166: depth data); obtaining fifth location information of the first object in the three-dimensional coordinate system based on the first location information and the depth information of the first object (See Fig. 10, ¶164-¶174: the fifth location can be an updated location of the first object based on the variable positions the person can be in, such as sitting/standing/moving); and obtaining the pose variation of the first object relative to the second object based on the second location information and the fifth location information (See Fig. 10, ¶164-¶174: the pose variation can change based on the fact that the first and second objects are connected and that the target can move/sit/stand, thus producing a fifth location).

Regarding claim 7, Holzer teaches the method according to claim 5, wherein obtaining the pose variation of the first object relative to the second object based on the first location information and the second location information comprises: transforming the second location information, to obtain fifth location information of the first object in the three-dimensional coordinate system (See Fig. 10, ¶164-¶174: the transforming may be changing the second location information based upon a change in the user's motion, such as sit/stand/move. ¶165: joints are expressed in 3D coordinates); and projecting the fifth location information to the target image, to obtain sixth location information (See Fig. 10, ¶164-¶174: the pose variation can change based on the fact that the first and second objects are connected and that the target can move/sit/stand, thus producing a sixth location that was projected from the fifth location), wherein when a variation between the sixth location information and the first location information meets a preset condition, the pose variation of the first object relative to the second object is a transformation matrix for transforming the second location information (See Fig. 10, ¶164-¶174: the transformation matrix is considered to be any of the calculation adjustments applied to the objects).

Regarding claim 13, Holzer teaches a method of augmented reality (See abstract: a method, and is describing augmented reality), comprising: obtaining a target image and third location information of a third object in a three-dimensional coordinate system (¶165: joints are expressed in 3D coordinates), wherein the target image comprises an image of a first object (See Fig. 10, ¶164-¶174: the third location can be location 1015, the knee, in Fig. 10. See the claim 1 and claim 5 citations and rejections above); inputting the target image into a first neural network, to obtain a pose variation of the first object relative to a second object, wherein the first neural network is obtained through training based on second location information of the second object in the three-dimensional coordinate system (¶166, ¶251, ¶281: the citations discuss taking the poses, which vary and are variable, and putting them into a neural network. ¶166: "In another embodiment, a library of 3-D poses can be projected into 2-D. Then, the 2-D projections of the 3-D poses can be compared to a current 2-D pose of the person as determined via the skeleton detection. In one embodiment, the current 2-D pose can be determined via the application of a neural network to a frame of 2-D image data. Next, when a 2-D projection of a 3-D pose is matched with the current 2-D pose determined from the image data 1008, the current 2-D pose can be assumed to have similar attributes as the 3-D pose. In one embodiment, this approach can be used to estimate a depth for each joint or a relative distance between the depths for each joint."), the second object is a reference object of the first object, and the second location information and the third location information are preset information (See Fig. 10, ¶164-¶174: the person may be sitting or standing or moving, and in doing so there will be a pose variation based on the locations); transforming the third location information based on the pose variation, to obtain fourth location information of the third object in the three-dimensional coordinate system; and rendering the third object in the target image based on the fourth location information, to obtain a new target image (See Fig. 10, ¶164-¶174: the third location information may be updated based on the change of sitting/standing/moving to obtain a fourth position; therefore, when rendering the third object at the fourth location, the update obtains a new target image.).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT J CRADDOCK, whose telephone number is (571) 270-7502. The examiner can normally be reached Monday - Friday, 10:00 AM - 6:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona E. Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT J CRADDOCK/
Primary Examiner, Art Unit 2618
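
For context on the §102 positions above: claim 5 computes a pose variation of a first object relative to a preset reference (second) object, applies that variation to a preset third object, and renders the third object at the transformed location in the image. The sketch below is a minimal, hypothetical illustration of such a pipeline; the translation-only pose model, the pinhole projection, and every function name and number in it are illustrative assumptions, not the applicant's or Holzer's implementation.

    # Hypothetical sketch of the claim 5 pipeline; not the applicant's or Holzer's code.
    import numpy as np

    def pose_variation(first_loc, second_loc_preset):
        """Pose variation of the first object relative to its reference (second) object,
        modeled here as a pure translation for simplicity."""
        return np.asarray(first_loc, float) - np.asarray(second_loc_preset, float)

    def transform(third_loc_preset, variation):
        """Apply the pose variation to the preset third-object location
        (claim 5's 'fourth location information')."""
        return np.asarray(third_loc_preset, float) + variation

    def project(point_3d, focal=800.0, cx=320.0, cy=240.0):
        """Simple pinhole projection of a camera-space 3D point into the target image."""
        x, y, z = point_3d
        return np.array([focal * x / z + cx, focal * y / z + cy])

    # Preset (second and third object) locations and an observed first-object location.
    second_preset  = [0.0, 1.4, 3.0]   # e.g. a reference joint such as a shoulder
    third_preset   = [0.0, 1.0, 3.0]   # e.g. the object to be re-rendered
    first_observed = [0.1, 1.5, 3.2]   # first object after the user moved

    variation = pose_variation(first_observed, second_preset)
    fourth_location = transform(third_preset, variation)
    pixel = project(fourth_location)   # where the third object lands in the new target image
    print(fourth_location, pixel)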

Prosecution Timeline

Aug 24, 2023 — Application Filed
Oct 13, 2023 — Response after Non-Final Action
Sep 19, 2025 — Non-Final Rejection (§102, §103)
Jan 07, 2026 — Response Filed
Mar 20, 2026 — Final Rejection (§102, §103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597214 — SCANNABLE CODES AS LANDMARKS FOR AUGMENTED-REALITY CONTENT
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597101 — IMAGE TRANSMISSION SYSTEM, IMAGE TRANSMISSION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579767 — AUGMENTED-REALITY SYSTEMS AND METHODS FOR GUIDED INSTALLATION OF MEDICAL DEVICES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579792 — ELECTRONIC DEVICE FOR OBTAINING IMAGE DATA RELATING TO HAND MOTION AND METHOD FOR OPERATING SAME
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12555331 — INFORMATION PROCESSING APPARATUS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+14.4%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 616 resolved cases by this examiner. Grant probability is derived from the career allow rate.
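
The projection figures can be reproduced, to rounding, from the examiner statistics above. A minimal sketch follows, assuming the tool simply adds the interview lift to the career allow rate and caps the result; the cap and rounding behavior are assumptions, since the tool's actual model is not disclosed on this page.

    # Minimal sketch of the projection arithmetic; the cap and rounding are assumptions.
    career_allow_rate = 519 / 616          # ~84% -> "Grant Probability"
    interview_lift = 0.144                 # "+14.4%" from the examiner-intelligence panel

    grant_probability = career_allow_rate
    with_interview = min(grant_probability + interview_lift, 0.99)   # ~98.7%, shown as 99%

    print(f"Grant probability: {grant_probability:.0%}")
    print(f"With interview:    {with_interview:.0%}")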
