Prosecution Insights
Last updated: April 19, 2026
Application No. 18/796,192

DRIFT CANCELATION FOR PORTABLE OBJECT DETECTION AND TRACKING

Final Rejection — §112, §DP

Filed: Aug 06, 2024
Examiner: FRANK, EMILY J
Art Unit: 2629
Tech Center: 2600 — Communications
Assignee: Sim Ip Hxr LLC
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 69% — above average (437 granted / 632 resolved; +7.1% vs TC avg)
Interview Lift: +19.2% — strong; allow rate among resolved cases with an interview vs. without
Typical Timeline: 3y 0m average prosecution; 31 applications currently pending
Career History: 663 total applications across all art units
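
How these headline figures compose from the raw counts, as a minimal Python sketch. The 437/632 counts and the +7.1% delta come from the panel above; the interview/no-interview split is a hypothetical illustration, since the per-interview counts are not shown on this page:

```python
# Arithmetic behind the examiner stats above. Counts 437/632 and the
# +7.1% delta are from this page; the interview split is hypothetical.
granted, resolved = 437, 632
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")   # 69.1%, shown as 69%

tc_avg = allow_rate - 0.071                     # implied by "+7.1% vs TC avg"
print(f"implied TC average: {tc_avg:.1%}")      # ~62.0%

# Hypothetical split that roughly reproduces the reported +19.2% lift:
with_iv = (157, 190)     # (granted, resolved) among cases with an interview
without_iv = (280, 442)  # remainder of the 437/632 career totals
lift = with_iv[0] / with_iv[1] - without_iv[0] / without_iv[1]
print(f"interview lift: {lift:+.1%}")           # ~+19.3%, vs. +19.2% shown
```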

Statute-Specific Performance

§101:  2.8% (-37.2% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§112:  8.1% (-31.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 632 resolved cases.

Office Action

§112, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 16, 17 and 19 are objected to because of the following informalities:

Claim 16, line 5: “a object” should be changed to --an object--.
Claim 17, lines 5-6: “a object” should be changed to --an object--.
Claim 19, line 5: “a object” should be changed to --an object--.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1-20 describe “an object” whose location is obtained in relation to a sensor. The specification teaches an object 116 in general in Fig. 1, which is described as a surface portion 116 in Fig. 8. The specific object described is the hands; see, for example, [0046] and Figs. 4-10, where the specification describes the object at 114, which is the hands. The specification does not teach an object in general and teaches only the object as the hands.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12,067,157, over claims 1-17 of U.S. Patent No. 11,537,196, and over claims 4-20 of U.S. Patent No. 11,099,630. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims are obvious variations (see the commentary portions below, italicized in the original, for further details).

1. A method including:

12,067,157 discloses at claim 1. A method of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device, the method including:

11,537,196 discloses at claim 1. A method of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device, the method including:

11,099,630 discloses at claim 4. A method of integrating real three-dimensional (3D) space sensing with an augmented reality head mounted device, the method including:

obtaining, in a three-dimensional (3D) space, a first position, of an object in a first reference frame based, at least in part, on a location of a sensor;

obtaining a first position, at a first time t0, of at least one hand in a first reference frame of a three-dimensional (3D) sensory space; where “at least one hand” reads on an object

obtaining a first position, at a first time t0, of at least one hand in a first reference frame of a three-dimensional (3D) sensory space; where “at least one hand” reads on an object

at a first time t0, using a sensor attached to the augmented reality head mounted device, sensing a first position of at least one hand in a first reference frame of a three-dimensional (3D) sensory space located in a real environment, including tracking at least portions of the hand for causing interaction between the hand and the augmented reality head mounted device; generating data representing a first virtual representation of the hand at the first position, wherein the first virtual representation is rendered in a virtual environment of the augmented reality head mounted device superimposed on the real environment; where “at least one hand” reads on an object

obtaining, in the 3D space, a second position, of the object in a second reference frame, different from the first position, based, at least in part, on a repositioning of the sensor;

obtaining, in the 3D sensory space, a second position, at a second time t1, of the hand different from the first position responsive to repositioning of the virtual reality head mounted device and an attached sensor wherein the hand has not moved in the 3D sensory space between t0 and t1;

initiating display of a first virtual representation of the hand at the first position, wherein the first virtual representation is rendered in a virtual environment of the augmented reality or virtual reality head mounted device; obtaining, in the 3D sensory space, a second position, at a second time t1, of the hand different from the first position responsive to repositioning of the virtual reality head mounted device and an attached sensor wherein the hand has not moved in the 3D sensory space between t0 and t1;

at a second time t1, sensing in the 3D sensory space, a second position of the hand different from the first position responsive to repositioning of the augmented reality head mounted device and the attached sensor due to body movement, wherein the hand has not moved in the 3D sensory space between t0 and t1; and

obtaining an actual position of the object by:

obtaining an actual second position for the hand, the actual second position generated by: obtaining a second reference frame that accounts for repositioning of the attached sensor; and

obtaining an actual second position for the hand, the actual second position generated by: obtaining a second reference frame that accounts for repositioning of the attached sensor; and

generating data representing a second virtual representation of the hand at an actual second position by: sensing motion of the attached sensor and calculating a second reference frame that accounts for repositioning of the attached sensor; where the sensing could be a method of obtaining

obtaining a transformed first position of the object into a common reference frame and a second transformed position of the object into the common reference frame; and determining the actual position of the object to be transformed second position in the common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes; and

obtaining transformed first and second positions of the hand into a common reference frame using a transformation that renders the first position in the first reference frame and the second position in the second reference frame into the common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes, whereby the sensed second position is transformed to the actual second position; and

obtaining transformed first and second positions of the hand into a common reference frame using a transformation that renders the first position in the first reference frame and the second position in the second reference frame into the common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes, whereby the sensed second position is transformed to the actual second position; and

calculating a transformation that renders the first position in the first reference frame and the second position in the second reference frame into a common reference frame; and transforming the first and second positions of the hand into the common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes, whereby the sensed second position is transformed to the actual second position.

providing a virtual representation of the object at the actual position to an augmented reality or virtual reality device.

providing to another process for display, a virtual representation of the hand at one or more of the first position and the second position, wherein the virtual representation is to be rendered in a virtual environment of the augmented reality or virtual reality head mounted device.

initiating display of a second virtual representation of the hand at the actual second position.

Where it would have been an obvious variation to “provide for display” and to “initiate display”. Where it would have been obvious to display the results.

2. The method of claim 1, wherein the common reference frame does not change as the sensor is repositioned.

12,067,157 discloses at claim 2. The method of claim 1, wherein the common reference frame is a world reference frame that does not change as the attached sensor is repositioned.

11,537,196 discloses at claim 2. The method of claim 1, wherein the common reference frame is a world reference frame that does not change as the attached sensor is repositioned.

11,099,630 discloses at claim 5. The method of claim 4, wherein the common reference frame is a world reference frame that does not change as the attached sensor is repositioned.

3. The method of claim 1, wherein the common reference frame is the second reference frame.

4. The method of claim 1, wherein the common reference frame is the first reference frame.

12,067,157 discloses at claim 3. The method of claim 1, wherein the common reference frame is the second reference frame.
Where claim 3 recites the common reference frame is the second reference frame where the claim designates one of the first and second reference frames to be the common reference frame for transformation, and while the patented claim happens to designate the second reference frame as the common reference frame, it would have been obvious to one of ordinary skill in the art that the first reference frame could be designated as the common reference frame similarly.

11,537,196 further discloses at claim 3. The method of claim 1, wherein the common reference frame is the second reference frame.

Where claim 3 recites the common reference frame is the second reference frame where the claim designates one of the first and second reference frames to be the common reference frame for transformation, and while the patented claim happens to designate the second reference frame as the common reference frame, it would have been obvious to one of ordinary skill in the art that the first reference frame could be designated as the common reference frame similarly.

11,099,630 discloses at claim 6. The method of claim 4, wherein the common reference frame is the second reference frame.

Where claim 6 recites the common reference frame is the second reference frame where the claim designates one of the first and second reference frames to be the common reference frame for transformation, and while the patented claim happens to designate the second reference frame as the common reference frame, it would have been obvious to one of ordinary skill in the art that the first reference frame could be designated as the common reference frame similarly.

5. The method of claim 1, wherein the transformed first position and the transformed second position are transformed into the common reference frame is based, at least in part, on an affine transformation.

12,067,157 discloses at claim 4. The method of claim 1, wherein the transformed first and second positions of the hand into a common reference frame further includes applying an affine transformation.

11,537,196 further discloses at claim 4. The method of claim 1, wherein the transformed first and second positions of the hand into a common reference frame further includes applying an affine transformation.

11,099,630 discloses at claim 7. The method of claim 4, wherein the transforming the first and second positions of the hand into the common reference frame further includes applying an affine transformation.

6. The method of claim 1, including determining an orientation of the object at the first position with respect to the first reference frame and causing display of the object.

12,067,157 discloses at claim 5. The method of claim 1, further including determining an orientation of the hand at the first position with respect to the first reference frame and causing the display of the hand accordingly.

11,537,196 further discloses at claim 5. The method of claim 1, further including determining the orientation of the hand at the first position with respect to the first reference frame and causing the display of the hand accordingly.

11,099,630 discloses at claim 8. The method of claim 4, further including determining the orientation of the hand at the first position with respect to the first reference frame and causing interaction between the hand and the augmented reality accordingly.

Where it would have been obvious to perform display.

7. The method of claim 1, further including determining an orientation of the hand at the second position with respect to the second reference frame and causing the display of the hand accordingly.

12,067,157 discloses at claim 6. The method of claim 1, further including determining an orientation of the hand at the second position with respect to the second reference frame and causing the display of the hand accordingly.

11,537,196 further discloses at claim 6. The method of claim 1, further including determining the orientation of the hand at the second position with respect to the second reference frame and causing the display of the hand accordingly.

11,099,630 discloses at claim 9. The method of claim 4, further including determining the orientation of the hand at the second position with respect to the second reference frame and causing interaction between the hand and the augmented reality accordingly.

Where it would have been obvious to perform display.

8. The method of claim 1, wherein the first position of the object is obtained based, at least in part, on a determined translation of the object with respect to the common reference frame.

12,067,157 discloses at claim 7. The method of claim 1, wherein determining a position of the hand at the first position further includes calculating a translation of the hand with respect to the common reference frame and causing the display of the hand accordingly.

11,537,196 discloses at claim 7. The method of claim 1, wherein determining the position of the hand at the first position further includes calculating a translation of the hand with respect to the common reference frame and causing the display of the hand accordingly.

11,099,630 discloses at claim 10. The method of claim 4, wherein determining the position of the hand at the first position further includes calculating a translation of the hand with respect to the common reference frame and causing interaction between the hand and the augmented reality accordingly.

Where it would have been obvious to perform display.

9. The method of claim 1, wherein the second position of the object is obtained based, at least in part, on a determined translation of the object with respect to the common reference frame.

12,067,157 discloses at claim 8. The method of claim 1, wherein determining a position of the hand at the second position further includes calculating a translation of the hand with respect to the common reference frame and causing the display of the hand accordingly.

11,537,196 discloses at claim 8. The method of claim 1, wherein determining the position of the hand at the second position further includes calculating a translation of the hand with respect to the common reference frame and causing the display of the hand accordingly.

11,099,630 discloses at claim 11. The method of claim 4, wherein determining the position of the hand at the second position further includes calculating a translation of the hand with respect to the common reference frame and causing interaction between the hand and the augmented reality accordingly.

Where it would have been obvious that the interaction is a display.

10. A method including:

12,067,157 discloses at claim 9. A method of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device that renders a virtual background and one or more virtual objects, the method including:

11,537,196 discloses at claim 10.
A method of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device that renders a virtual background and one or more virtual objects, the method including:

11,099,630 discloses at claim 14. A method of integrating real three-dimensional (3D) space sensing with a virtual reality head mounted device that renders a virtual background and one or more virtual objects, the method including:

obtaining a first position of an object in a first reference frame of a three-dimensional (3D) space;

at a first time, obtaining a first position of at least one hand in a first reference frame of a three-dimensional (3D) sensory space; where “at least one hand” reads on an object

at a first time, obtaining a first position of at least one hand in a first reference frame of a three-dimensional (3D) sensory space; where “at least one hand” reads on an object

at a first time, using a sensor attached to the virtual reality head mounted device, sensing a first position of at least one hand in a first reference frame of a three-dimensional (3D) sensory space, including tracking at least portions of the hand for causing interaction between the hand and the virtual reality head mounted device; where “at least one hand” reads on an object

at a second time, at a second time, at a second time,

obtaining a second position of the object in a second reference frame that accounts for repositioning of a sensor; and

obtaining a second position of the hand; obtaining a second reference frame that accounts for repositioning of an attached sensor; and

obtaining a second position of the hand; obtaining a second reference frame that accounts for repositioning of an attached sensor; and

sensing a second position of the hand; responsive to repositioning of the head mounted device and the attached sensor due to body movement, sensing motion of the attached sensor and calculating a second reference frame that accounts for repositioning of the attached sensor; and where the sensing could be a method of obtaining

obtaining transformed first and second positions of the object into a common reference frame based, at least in part, on one or more transformations that render the first position in the first reference frame and the second position in the second reference frame into the common reference frame.

obtaining transformed first and second positions of the hand into a common reference frame using a transformation that renders the first position in the first reference frame and the second position in the second reference frame into a common reference frame.

obtaining transformed first and second positions of the hand into a common reference frame using a transformation that renders the first position in the first reference frame and the second position in the second reference frame into a common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes.

calculating a transformation that renders the first position in the first reference frame and the second position in the second reference frame into a common reference frame; and transforming the first and second positions of the hand into the common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes.

Where the calculation could be obtaining.

11. The method of claim 10, wherein the common reference frame does not change as the sensor is repositioned.

12,067,157 discloses at claim 10. The method of claim 9, wherein the common reference frame is a world reference frame that does not change as the attached sensor is repositioned.

11,537,196 discloses at claim 11. The method of claim 10, wherein the common reference frame is a world reference frame that does not change as the attached sensor is repositioned.

11,099,630 discloses at claim 15. The method of claim 14, wherein the common reference frame is a world reference frame that does not change as the attached sensor is repositioned.

12. The method of claim 10, wherein the common reference frame is the second reference frame.

13. The method of claim 10, wherein the common reference frame is the first reference frame.

12,067,157 discloses at claim 11. The method of claim 9, wherein the common reference frame is the second reference frame.

Where claim 11 recites the common reference frame is the second reference frame where the claim designates one of the first and second reference frames to be the common reference frame for transformation, and while the patented claim happens to designate the second reference frame as the common reference frame, it would have been obvious to one of ordinary skill in the art that the first reference frame could be designated as the common reference frame similarly.

11,537,196 discloses at claim 12. The method of claim 10, wherein the common reference frame is the second reference frame.

Where claim 12 recites the common reference frame is the second reference frame where the claim designates one of the first and second reference frames to be the common reference frame for transformation, and while the patented claim happens to designate the second reference frame as the common reference frame, it would have been obvious to one of ordinary skill in the art that the first reference frame could be designated as the common reference frame similarly.

11,099,630 discloses at claim 16. The method of claim 14, wherein the common reference frame is the second reference frame.

Where claim 16 recites the common reference frame is the second reference frame where the claim designates one of the first and second reference frames to be the common reference frame for transformation, and while the patented claim happens to designate the second reference frame as the common reference frame, it would have been obvious to one of ordinary skill in the art that the first reference frame could be designated as the common reference frame similarly.

14. The method of claim 10, wherein the sensor is integrated into an augmented reality or virtual reality head mounted device.

12,067,157 discloses at claim 12. The method of claim 9, wherein the attached sensor is integrated into a unit with the augmented reality or virtual reality head mounted device.

11,537,196 discloses at claim 13. The method of claim 10, wherein the attached sensor is integrated into a unit with the augmented reality or virtual reality head mounted device.

11,099,630 discloses at claim 17. The method of claim 14, wherein the attached sensor is integrated into a unit with the virtual reality head mounted device.

15. The method of claim 10, wherein the first and second positions of the object into the common reference frame based, at least in part, on an affine transformation.

12,067,157 discloses at claim 13. The method of claim 9, wherein the transforming first and second positions of the hand into the common reference frame further includes applying at least one affine transformation.

11,537,196 discloses at claim 14.
The method of claim 10, wherein the transforming first and second positions of the hand into the common reference frame further includes applying at least one affine transformation.

11,099,630 discloses at claim 18. The method of claim 14, wherein the transforming the first and second positions of the hand into the common reference frame further includes applying at least one affine transformation.

16. A system including:

12,067,157 discloses at claim 17. A system of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device, the system including:

11,537,196 discloses at claim 16. A system of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device, the system including:

11,099,630 discloses at claim 12. A system of integrating real three-dimensional (3D) space sensing with an augmented reality head mounted device, the system including:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to perform operations including:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to

obtaining, in a three-dimensional (3D) space, a first position of a object in a first reference frame based, at least in part, on a location of a sensor; obtaining, in the 3D space, a second position of the object in a second reference frame, different from the first position, based, at least in part, on a repositioning of the sensor; obtaining an actual position of the object by: obtaining a transformed first position of the object into a common reference frame and a transformed second position of the object into the common reference frame; and determining the actual position of the object to be the transformed second position in the common reference frame, wherein the common reference frame has a fixed point of reference and an initial orientation of axes; and providing a virtual representation of the object at the actual position to an augmented reality or virtual reality device.

refer to the mapping explanation in the rejection of claim 1 above as compared to patented claim 17 for similar reasoning.

refer to the mapping explanation in the rejection of claim 1 above as compared to patented claim 16 for similar reasoning.

implement the method of claim 4.
17. One or more non-transitory computer readable media having instructions stored thereon, which instructions when executed by one or more processors, perform operations comprising: obtaining, in a three-dimensional (3D) space, a first position of a object in a first reference frame based, at least in part, on a location of a sensor; obtaining, in the 3D space, a second position of the object in a second reference frame based, at least in part, on, a repositioning of the sensor; and obtaining an actual position of the object by: obtaining a transformed first position of the object into a common reference frame and a transformed second position of the object into the common reference frame; and determining the actual position of the object to be transformed second position in the common reference frame; and providing a virtual representation of the object at the actual position to an augmented reality or virtual reality device.

12,067,157 discloses at claim 15. One or more non-transitory computer readable media having instructions stored thereon for performing a method of claim 1.

11,537,196 discloses at claim 9. One or more non-transitory computer readable media having instructions stored thereon for performing a method of claim 1.

11,099,630 discloses at claim 13. One or more non-transitory computer readable media having instructions stored thereon for performing a method of claim 4.

18. A system including:

12,067,157 discloses at claim 16. A system of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device that renders a virtual background and one or more virtual objects, the system including:

11,537,196 discloses at claim 17. A system of integrating real three-dimensional (3D) space sensing with an augmented reality or virtual reality head mounted device that renders a virtual background and one or more virtual objects, the system including:

11,099,630 discloses at claim 19. A system of integrating real three-dimensional (3D) space sensing with a virtual reality head mounted device that renders a virtual background and one or more virtual objects, the system including:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to perform operations including:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to:

a processor and a computer readable storage medium storing computer instructions configured to cause the processor to

obtaining a first position of an object in a first reference frame of a three-dimensional (3D) space; obtaining a second position of the object in a second reference frame that accounts for repositioning of a sensor; and obtaining transformed first and second positions of the object into a common reference frame based, at least in part, on one or more transformations that render the first position in the first reference frame and the second position in the second reference frame into the common reference frame.

perform steps of claim 9 above and therefore interpreted and rejected based on similar reasoning (refer to claim 10 above and claim 17 as published).

implement the method of claim 14.
19. One or more non-transitory computer readable media having instructions stored thereon, which instructions when executed by one or more processors, perform operations comprising: obtaining a first position of a object in a first reference frame of a three-dimensional (3D) space; obtaining a second position of the object in a second reference frame that accounts for repositioning of a sensor; and obtaining transformed first and second positions of the object into a common reference frame based, at least in part, on one or more transformations that render the first position in the first reference frame and the second position in the second reference frame into the common reference frame.

12,067,157 discloses at claim 17. One or more non-transitory computer readable media having instructions stored thereon for performing a method of claim 9.

11,537,196 discloses at claim 15. One or more non-transitory computer readable media having instructions stored thereon for performing a method of claim 10.

11,099,630 discloses at claim 20. One or more non-transitory computer readable media having instructions stored thereon for performing a method of claim 14.

20. The method of claim 10, wherein the common reference frame has a fixed point of reference and an initial orientation of axes.

12,067,157 discloses at claim 18. The method of claim 1, wherein the common reference frame has a fixed point of reference and an initial orientation of axes.

11,537,196 discloses at claim 1… wherein the common reference frame has a fixed point of reference and an initial orientation of axes…

11,099,630 discloses at claim 4… wherein the common reference frame has a fixed point of reference and an initial orientation of axes…

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY J FRANK whose telephone number is (571) 270-7255. The examiner can normally be reached Monday-Thursday 8AM-6PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin C Lee, can be reached at (571) 272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EJF/

/BENJAMIN C LEE/
Supervisory Patent Examiner, Art Unit 2629
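
For orientation on the subject matter mapped above: every claim set recites the same core operation, transforming positions sensed in two different sensor reference frames (before and after the head-mounted sensor is repositioned) into a common reference frame with a fixed point of reference, so that sensor drift is not mistaken for object motion. Below is a minimal numeric sketch of that step, assuming 4x4 homogeneous affine transforms; the poses and positions are invented for illustration and are not taken from the application's specification.

```python
# Sketch of the drift-cancelation step the claims recite: positions sensed
# in a moving sensor's frame are mapped into a common (world) reference
# frame via affine transforms, so sensor motion between t0 and t1 does not
# masquerade as object motion. All values below are illustrative.
import numpy as np

def affine(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_common(T_frame_to_common, p):
    """Transform a 3D point p from a sensor frame into the common frame."""
    return (T_frame_to_common @ np.append(p, 1.0))[:3]

# Sensor pose at t0 defines the first reference frame (here: identity).
T0 = affine(np.eye(3), np.zeros(3))

# Between t0 and t1 the head-mounted sensor translates 10 cm along +x,
# so the second reference frame is offset relative to the common frame.
T1 = affine(np.eye(3), np.array([0.10, 0.0, 0.0]))

p_t0 = np.array([0.0, 0.0, 0.5])    # first sensed position (first frame)
p_t1 = np.array([-0.10, 0.0, 0.5])  # second sensed position (second frame)

q0 = to_common(T0, p_t0)  # transformed first position
q1 = to_common(T1, p_t1)  # transformed second position = "actual position"

# The object did not move: both transformed positions coincide.
assert np.allclose(q0, q1)
print("actual position in common frame:", q1)
```

Because the object is stationary in this example, the transformed second position coincides with the transformed first position; under the claims' framing, that transformed second position is the "actual position" provided for display.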

Prosecution Timeline

Aug 06, 2024 — Application Filed
Jul 30, 2025 — Non-Final Rejection — §112, §DP
Oct 31, 2025 — Response Filed
Feb 23, 2026 — Final Rejection — §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585343 — METHOD AND SYSTEM FOR DETERMINING STYLUS TILT IN RELATION TO A TOUCH-SENSING DEVICE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12547208 — Foldable Display Device and Driving Method Therefor
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12531033 — BACKLIGHT MODULE BRIGHTNESS CALIBRATION METHOD, DISPLAY DEVICE THEREOF, AND BRIGHTNESS CALIBRATION DEVICE THEREOF
Granted Jan 20, 2026 (2y 5m to grant)

Patent 12512070 — DISPLAY DEVICE MINIMIZING EFFECT OF LIGHT-EMITTING ELEMENT
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12505791 — TILED DISPLAY DEVICE UTILIZING A RAIL FRAME AND MAGNETS
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 88% (+19.2%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate

Based on 632 resolved cases by this examiner. Grant probability derived from career allow rate.
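
A note on how the projection figures appear to combine (an inference from the labels on this page, not a documented formula): the with-interview probability looks like the career allow rate plus the interview lift.

```python
# Inferred relationship between the projection figures shown above;
# this additive model is an assumption, not a documented formula.
career_allow_rate = 0.69    # "Grant Probability"
interview_lift = 0.192      # "+19.2%" interview lift
with_interview = career_allow_rate + interview_lift
print(f"{with_interview:.0%}")  # 88%, matching "With Interview"
```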
