Prosecution Insights
Last updated: April 19, 2026
Application No. 18/029,768

POSITION ESTIMATION SYSTEM, POSITION ESTIMATION METHOD, AND COMPUTER PROGRAM

Final Rejection — §101, §102, §103, §112
Filed
Mar 31, 2023
Examiner
LEMIEUX, IAN L
Art Unit
2669
Tech Center
2600 — Communications
Assignee
NEC Corporation
OA Round
2 (Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 87% (496 granted / 569 resolved) — above average, +25.2% vs TC avg
Interview Lift: +9.6% (moderate), based on resolved cases with interview
Typical Timeline: 2y 4m average prosecution; 34 applications currently pending
Career History: 603 total applications across all art units
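
The headline figures above are simple derived quantities. Below is a minimal sketch of the arithmetic, assuming the interview lift is additive in percentage points (which matches the displayed 97%); the variable names are ours, not the tool's documented methodology.

# Sketch of how the card's headline metrics appear to be derived.
# Assumption: the interview lift is additive in percentage points.
granted, resolved = 496, 569                 # career totals shown above
allow_rate = granted / resolved              # 0.872 -> displayed as 87%

interview_lift_pts = 9.6                     # "Interview Lift" shown above
with_interview = 100 * allow_rate + interview_lift_pts   # ~96.8 -> displayed as 97%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Grant probability with interview: {with_interview:.0f}%")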

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 569 resolved cases.
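
A quick check on the chart data: the implied Tech Center reference can be recovered from each row as the examiner's rate minus the displayed delta. The page does not say exactly what the per-statute rate measures, so this is arithmetic only; all names here are ours.

# Recover the implied TC reference from each row: tc_avg = rate - delta.
examiner_rate = {"101": 11.2, "102": 19.1, "103": 39.6, "112": 19.4}     # %
delta_vs_tc   = {"101": -28.8, "102": -20.9, "103": -0.4, "112": -20.6}  # points

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1f}%, implied TC average {tc_avg:.1f}%")
# Every row implies the same ~40.0% reference, suggesting a single TC-wide estimate.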

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed 08/18/2025 in response to the Non-Final Office Action mailed 05/16/2025 has been entered. Claims 1-14 are currently pending in U.S. Patent Application No. 18/029,768 and an Office action on the merits follows.

Response to Claim Interpretation in view of 35 USC § 112(f)

Applicant’s remarks at Section III submit the claims should not be interpreted under the provisions of 35 USC 112(f), without any explanation/reasoning for how sufficient structure, to include algorithmic structure where applicable (particularly any method claims implementing functional claiming directed to an outcome without reciting how the outcome is accomplished), is recited in the claim via the language “an estimation circuit”, as would be understood by POSITA. Applicant’s remarks make no reference to MPEP 2181, any of Prongs A-C of the three-prong analysis therein, and do not address either of the two rebuttable presumptions. The claims as amended feature the use of ‘an estimation circuit’ for accomplishing the functional limitations that are those two ‘estimat[ing] …’ steps, which arguably at best has the effect of replacing one nonce term for another when considered at Prong A. It is well established that method claims can invoke the provisions of 112(f), and claim 9 as an example recites those two ‘estimating’ steps, “using the estimation circuit”. What circuit structure is sufficient for performing the associated functional language considered at Prong B, if it is asserted that “an estimation circuit” is a nonce term/generic alternative to means/step language at Prong A? A ‘circuit’ is e.g. a closed loop path through which electric current can flow, and ‘estimation’ is not a structural modifier for Prong C purposes. Applicant’s Specification does at least distinguish between “processing blocks” and “physical processing circuits” ([0083]), and it may be argued that a ‘circuit’ on its face and as understood by POSITA has more of a physical/hardware connotation as compared to a software processing block/unit/module (even if this fact does little to resolve definiteness concerns for computer implemented methods/processes relying on functional claiming – MPEP 2173.05(g)). Accordingly, because the Examiner understands “circuit” to have a ‘sufficiently definite’ meaning as the name for something physical/structural, and in further view of the evidentiary standard associated with that second rebuttable presumption (Williamson v. Citrix Online, LLC, 792 F.3d 1339, 1349, 115 USPQ2d 1105, 1111 (Fed. Cir. 2015)), the claims as amended are understood to no longer invoke the provisions of 112(f). Claim interpretation follows that guidance as presented in MPEP 2173.01 and MPEP 2111.01 (see flow chart therein), and Examiner notes differences in permissible interpretation are addressed in the claim rejection(s) that follow.

Response to 35 USC § 112 Rejections

In view of the foregoing amendments to claim 5, rejection(s) under 35 U.S.C. § 112(b) as previously presented is/are overcome/withdrawn. New grounds concerning clarity/precision, in view of the amended language, are presented below.

Response to 35 USC § 101 Rejections

Applicant's arguments filed 08/18/2025, and concerning eligibility analysis, have been fully considered but they are not persuasive. 
Applicant’s remarks at page 6 open with reference to the machine-or-transformation test, and with reference to In re Bilski, 545 F.3d 943, 954 (Fed. Cir. 2008). Examiner understands the July 17 2024 PEG, and the adopted Alice/Mayo two-step test in further view of the enumerated Abstract Idea groupings of the 2019 PEG, to be entirely consistent with and to incorporate those considerations drawn from earlier eligibility analysis and Bilski, as identified in MPEP 2106.05(b). At the Supreme Court, Bilski v. Kappos, 561 U.S. 593, 604, 95 USPQ2d 1001, 1007 (2010), page 594, (b) “The machine-or-transformation test is not the sole test for patent eligibility under § 101. The Court’s precedents establish that although that test may be a useful and important clue or investigative tool, it is not the sole test for deciding whether an invention is a patent-eligible “process” under § 101. … Finally, the Federal Circuit incorrectly concluded that this Court has endorsed the machine-or-transformation test as the exclusive test. Recent authorities show that the test was never intended to be exhaustive or exclusive. See, e. g., Parker v. Flook, 437 U. S. 584, 588, n. 9. Pp. 602–604.”. This is clearly reflected in MPEP 2106.05(b), which further describes “It is noted that while the application of a judicial exception by or with a particular machine is an important clue, it is not a stand-alone test for eligibility. Id. … And if a claim fails the Alice/Mayo test (i.e., is directed to an exception at Step 2A and does not amount to significantly more than the exception in Step 2B), then the claim is ineligible even if it passes the M-or-T test. DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1256, 113 USPQ2d 1097, 1104 (Fed. Cir. 2014) ("[I]n Mayo, the Supreme Court emphasized that satisfying the machine-or-transformation test, by itself, is not sufficient to render a claim patent-eligible, as not all transformations or machine implementations infuse an otherwise ineligible claim with an 'inventive concept.'").” As suggested in Applicant’s citation at paragraph 1, page 7 of the remarks, and also at III. of MPEP 2106.05(b), any incorporation/ involvement of a particular machine as defined therein, must not be extra-solution activity or field-of-use (see MPEP 2106.05(g) and (h)). Examiner would assert that even an explicitly involved HMD with a camera imaging a reflector/eye/glasses at the minimum constitutes field of use (MPEP 2106.05(h)) and/or the ‘apply it’ considerations of 2106.05(f), and that this involvement does not outweigh those limitations drawn to the exception. Representative claim 9 in particular favors such a determination – as the claim arguably does not even recite any particular machine, but instead the ‘use’ of a so-called ‘estimation circuit’ to accomplish steps drawn to the exception, and no more. Applicant’s remarks additionally reference the 2019 PEG from January, and a clarifying October Update, regarding considerations at both Prongs 1 and 2 of Step 2A. Said remarks touch briefly on that first of the identified relevant considerations for Step 2A Prong Two, namely as found in MPEP 2106.04(d): • An improvement in the functioning of a computer, or an improvement to other technology or technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a); However remarks do not identify what the improvement is and/or how it is realized by any specifically recited limitations clearly considered as ‘additional elements’ (and not drawn to the exception) when considered at Prong One. 
Instead they assert that the claims are not ‘directed to’ the exception because there are some ‘additional elements’ broadly, or that the claims are integrated because calculating a relative position is itself useful. The Alice/Mayo two-step test’s roots in preemption concern identifying if the ‘additional elements’ outweigh the exception (which is why no limitations to include those falling on the side of the exception, or ‘additional elements’, are considered in a vacuum at Prong 2 of 2A and 2B). It is not the case that merely having some additional elements, is sufficient to preclude an analysis identifying the claims as being ‘directed to’ the exception. The July 17 2024 PEG is very clear in this regard. Remarks at page 7, last paragraph, suggest that broadly measuring a displacement/ relative position between an imager and a target, from an image of a reflector, has ‘numerous practical uses, such as in the realm of digital marketing and advertising’. Examiner doesn’t disagree that an exception may be found to have utility, but the ‘measuring’ described falls squarely under the math concepts/ operations grouping of the Abstract Idea(s) exception, and an assertion that there are a myriad of Business Methods related uses associated with such a measuring, none recited in the claim (and arguably subject matter for a different field of search, CPC G06Q), only serves to justify concerns regarding preemption. For at least these reasons Examiner finds Applicant’s remarks regarding eligibility analysis non-persuasive. Not only is it the case that the claims as broadly recited (2106.05(b) sub-section I) at best generally link the exception to a field-of-use involving a HMD (2106.05(b) sub-section III), the use of an ‘estimation circuit’ is the only ‘additional element’ outside of that image acquisition (2106.05(g) and (h)), which fails to integrate at Prong Two of 2A in view of MPEP 2106.05(f) and/or (h), and any purported improvement that is e.g. being able to measure a displacement/position, is an improvement to the exception itself (MPEP §§ 2106.04(d)(1) and 2106.05(a) – “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981)) in subsection II, below”). As a final consideration, while Applicant’s remarks, even in view of that mention of the enumerated Abstract Idea groupings set forth in the 2019 PEG, do not appear to argue that the ‘estimating’ steps do not fall under the math concepts/operations grouping at Prong One of 2A (nor has the Examiner relied upon any Tentative Abstract Idea requiring TC Director approval (MPEP 2106.04(a)(3))), the courts have declined to adopt such groupings Rideshare Displays, Inc v. Lyft, Inc., No. 23-2033, (Fed. Cir. September 29, 2025) (https://www.cafc.uscourts.gov/opinions-orders/23-2033.OPINION.9-29-2025_2579953.pdf). Response to Arguments/Remarks Applicant's argument(s) regarding Vlaskamp (US 2022/0391013 A1) as applied have been fully considered but they are not persuasive. For the case of the claims as amended, that second ‘estimating’ is based on ‘a position of the target reflected to the reflector in the image for estimation’, as compared to that ‘image for estimation’ broadly/previously. As characterized by Applicant, Vlaskamp allegedly discloses only a reflection “within the target” and “not within a reflector separate from the target”. 
This characterization simply points to e.g. a reflector equivalent in Vlaskamp – the user/wearer’s eye, and asserts it is most appropriately drawn to the recited ‘target’ itself – which was not how Vlaskamp was applied because the claim as originally presented required that the target/display 220/etc., be outside of the field of view of the camera/imager (the camera faces the reflector/eyes). Vlaskamp as applied does not rely upon the eye/reflector, as being the target itself – instead the ‘target’ equivalent(s) are anything that may be reflected in the user’s eye/eyewear (eyewear distinct from the HMD itself – see claim 7), e.g. patterns/lighting displayed on 220, lights surrounding the display similarly reflected in the user’s eye(s), etc.. As applied, Vlaskamp reads on determining a second relative position information (target relative to camera – even if indirectly), on the basis of an image of the reflector (e.g. an image of the user’s eyes/eyewear), comprising a position of targets (various lights, displayed patterns, etc.,) reflected therein. Mapping to Vlaskamp as previously presented and maintained in the rejections that follow is not exhaustive. Applicant’s remarks at page 9 section B, assert method and CRM claims 9 and 10 respectively, are patentable for the same/ analogous reasons as those provided for the case of claim 1 in section A. Claims 9 and 10 however have not been amended to reflect that “on the basis of a position of the target reflected to the reflector in the image for estimation and the first relative position”. Instead claims 9 and 10 recite “on the basis of the image for estimation and the first relative position”. Those rejections previously presented remain applicable/non-rebutted accordingly, because claims 9 and 10 do not reflect the amendment argued for the case of claim 1. For the purposes of compact prosecution and a more concise action however, in further view of the manner in which the Examiner maintains that the originally applied grounds reads even for the case of the claim(s) as amended, claims 9 and 10 are rejected in the short-hand. As a final consideration, the claims as amended raise clarity concerns – specifically the question of what component/element does the ‘reflecting to’ if not the reflector itself. See the 112(b) rejection(s) that follow. Claim Objections Claims 1 and 11 are objected to because of the following informalities: Claim 1 typo: a reflector ‘that that’. Claim 11 ‘to correction a distortion’. Appropriate correction is required. Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claim(s) 1-8 and 11-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. 
Claim 1, line 4, recites the limitation “on the basis of a position of the target reflected to the reflector in the image for estimation and the first relative position”. For the case of the language in question it is not clear what performs the “reflecting to” if not the reflector itself, and the language presented appears to require that the reflector is not what is actually doing the ‘reflecting to’, as it passively receives the ‘position of the target’ reflected to it. Dependent claims 2-8 and 11-14 are similarly rejected as they inherit and fail to cure that deficiency identified above for the case of claim 1. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claim(s) 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in particular an Abstract Idea falling under the (a) mathematical concepts category (mathematical relationships, formulas or equations, and/or calculations), not ‘integrated into a practical application’ at Prong Two of Step 2A and without ‘significantly more’ at Step 2B. Step 1: The claim(s) in question are directed to primarily a computer implemented method/process for determining a target/display pose (following ‘Yes’ path at Step 1). Corresponding system and non-transitory CRM claim(s) are congruent in scope, and while featuring generic computer hardware considered under the ‘apply it’ considerations of MPEP 2106.05(f), these claims also are understood to be directed to a machine, manufacture and/or composition of matter for the purposes of analysis at Step 1. (Step 1: Yes). Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Representative claim(s) 1/9/10 recite(s) – “estimat[ing] a first relative position…” and “estimating a second relative position… on the basis of the first relative position”, drawn to the mathematical concepts Abstract Idea grouping. The first step of “acquiring an image” is an ‘additional element’ that constitutes a data gathering considered in accordance with MPEP 2106.05(g), to be discussed below at Prong Two. Concerning the mathematical concepts Abstract Idea Grouping Applicant may see MPEP 2106.04(a)(2), and subsection (C) Mathematical Calculations more specifically. Claim 5 further recites that integrated relative position calculation similarly drawn to the mathematical concepts Abstract Idea grouping. Reference may also be made to the July 17 2024 PEG identifying various processes steps identified as being drawn to the mathematical concepts Abstract Idea grouping – e.g. Example 47 claim 2 step(s) (b) (at page 7 describing the recited ‘discretizing’ as encompassing a mathematical concept e.g. 
rounding data values (that may also be performed mentally)) and (c) (interpreted so as to include mathematical calculations such as performing backpropagation and gradient descent algorithm(s)), in addition to Example 48 claim(s) 1 and 2 steps (b) (a ‘converting’ involving a mathematical operation using an STFT), (c) (an ‘embedding’ on the basis of an explicitly recited formula), and (e) (‘applying binary masks’) (see page 23 of the PEG – available https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf ). MPEP 2106.04(a)(2)(C): A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation. (Step 2A, Prong One: Yes). Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any ‘additional elements’ recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). Examiner notes for consideration at Prong Two of 2A that MPEP 2106.05(a), (b), (c), and (e) generally concern limitations that are indicative of integration, whereas 2106.05(f), (g), and (h) generally concern limitations that are not indicative of integration. As an additional note, ‘additional elements’ are generally limitations excluded from interpretation under the Abstract Idea groupings, and may comprise portions of limitations otherwise identified as falling under those Abstract Idea groupings of the 2019 PEG (e.g. any ‘determination’ that may be made mentally accompanied by the use of a neural network and/or generic computer hardware considered under the ‘apply it’ considerations of 2106.05(f)). Any ‘providing’/outputting broadly, and ‘collection’ of data (i.e. image acquisition(s)), be they images for training any learning model and/or data/images visually observable/ evaluated by a user/operator, also fail(s) to integrate at least in view of MPEP 2106.05(g) (extra-solution data gathering/output) and/or 2106.05(h) as ‘generally linking’ the exception to a field of use involving machine learning and/or imagery so acquired. The same determination holds for dependent claims that serve to limit the collection of data/images (by means of what is collected based on recited conditions (e.g. claim(s) 4, 6-8)) and/or introduce limitations generally linking to a field of use (2-3, 7, etc,). None of the instant claims appear to explicitly/clearly capture/recite any disclosed improvement in technology (see MPEP 2106.05(a)) and any ‘additional elements’, even when considered in combination, fail to integrate at Prong Two of Step 2A accordingly. 
The claim(s) in question rest with that final position/pose calculation/estimation, which in itself is not/cannot be a ‘practical application’. Integration in view of subsection (a) requires an identification of the manner in which the improvement is achieved, to be explicitly and specifically (not at a high level of generality) recited in the claims, as ‘additional elements’ precluded from interpretation under any of the Abstract Idea groupings (since the improvement cannot be to the exception itself). In view of MPEP 2106.05(f), the improvement cannot be merely/broadly automating what is otherwise the exception, nor can it be e.g. a ‘novel’ pose/position calculation per se. With reference to MPEP 2106.05(a): It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981)) Even when viewed in combination, the ‘additional elements’ present do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: No), and the claims are directed to the judicial exception. (Revised Step 2A: Yes [Wingdings font/0xE0] Step 2B). Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to ‘significantly more’ than the recited exception, i.e., whether any ‘additional element’, or combination of additional elements, adds an inventive concept to the claim. The considerations of Step 2A Prong 2 and Step 2B overlap, but differ in that 2B also requires considering whether the claims feature any “specific limitation(s) other than what is well-understood, routine, conventional activity in the field” (WURC) (MPEP 2106.05(d)). Such a limitation if specifically recited however, must still be excluded from interpretation under any of the Abstract Idea groupings. Step 2B further requires a re-evaluation of any additional elements drawn to extra-solution activity in Step 2A (e.g. gathering imagery) – however no limitations appear directed to any novel collection per se (HMD regularly acquire images of the eyes for gaze tracking/display modification purposes). Limitations not indicative of an inventive concept/ ‘significantly more’ include those that are not specifically recited (instead recited at a high level of generality), those that are established as WURC, and/or those that are not ‘additional elements’ by nature of their analysis at Prong One (i.e. reciting the exception). Reference may also be made to the 2024 PEG describing that an improvement/ inventive concept (for ‘significantly more’ determination(s)) cannot be to the judicial exception itself. The claim(s) in question recite little beyond those limitations recited at a high level of generality and falling under the mathematical concepts Abstract Idea grouping – as the claim(s) rest with a target/display/HMD pose determination/calculation itself, and would monopolize the exception accordingly. (Step 2B: No). Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. 
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. 1. Claims 1 and 8-12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Vlaskamp (US 2022/0391013 A1). As to claim 1, Vlaskamp discloses a position estimation system (Fig. 6, Fig. 7A, Fig. 18, etc.,) comprising: an imager (camera 324/402, etc.,); a reflector (eye(s) of the user) that see below, camera 324/402 is oriented towards the user/wearer’s eyes; cameras 324/462 imaging the eyes 410/500/610 reflecting glint(s) in addition to content/patterns rendered on corresponding left and right eye displays, [0009] “an imaging system configured to capture images of eyes of the user”, etc.,); an estimation circuit ([0177] “The depicted system 200 may also comprise a head pose processor 336, such as an ASIC (application specific integrated circuit)”); at least one memory that is configured to store instructions ([0173], [0324], [0328], etc.,); and at least one processor that is configured to execute the instructions (Fig. 6 612. Fig. 7A, [0006-0009] “The augmented reality system comprises a head-mounted display configured to present virtual content by outputting light to a user, an imaging device configured to capture images of eyes of the user, and at least one processor communicatively coupled to the head-mounted display and the imaging device”, etc.,) to acquire an image for estimation including a target that is disposed out of an imaging range of the imager, by imaging the reflector that reflects a light and that is disposed in the imaging range of the imager (camera 324/402 FoV oriented towards the eye and not the target/display 220, [0006-0009] “an imaging system configured to capture images of eyes of the user”, [0181] “engine 334, may be coupled to the eye cameras 324 via communication link 274, and be coupled to a projecting subsystem 318 (which may project light into user's eyes 302, 304 via a scanned laser arrangement in a manner similar to a retinal scanning display) via the communication link 272. The rendering engine 334 may also be in communication with other processing units such as, e.g., the sensor pose processor 332 and the image pose processor 336 via links 276 and 294 respectively”, [0216] “As shown in FIG. 6, head-mounted display system 600 may include an eye tracking system including a camera 324 that captures images of a user's eye 610. If desired, the eye tracking system may also include light sources 326a and 326b (such as light emitting diodes "LED"s). The light sources 326a and 326b may generate glints (i.e., reflections off of the user's eyes that appear in images of the eye captured by camera 324). The positions of the light sources 326a and 326b relative to the camera 324 may be known and, as a consequence, the positions of the glints within images captured by camera 324 may be used in tracking the user's eyes”, [0223] “Registration observer 620 may use information from eye tracking module 614 to identify whether the head-mounted unit 602 is properly positioned on a user's head. 
As an example, the eye tracking module 614 may provide eye location information, such as the positions of the centers of rotation of the user's eyes, indicative of the three-dimensional position of the user's eyes relative to camera 324 and head-mounted unit 602 and the eye tracking module 614 may use the location information to determine if display 220 is properly aligned in the user's field of view, or if the head-mounted unit 602 (or headset) has slipped or is otherwise misaligned with the user's eyes”; as an interpretation note the ‘reflecting unit’ is not limited solely to but instead may comprise/include user eye(s) in addition to one or more portions of the HMD/wearable device); estimate using the estimation circuit a first relative position that is a position of the reflector with respect to the imager, on the basis of the image for estimation (eye/‘reflecting unit’/reflector position/pose/axis information as determined by interocular axis estimation module 740, Fig. 18 1806, [0009] “The at least one processor is configured to determine an interocular axis of the user that extends between the user's left and right eyes based at least in part on one or more images captured by the imaging system”, [0182], [0213] “As the eye 500 moves to look toward different objects, the eye pose will change relative to the natural resting direction 520. The current eye pose may be determined with reference to an eye pose direction 524, which is a direction orthogonal to the surface of the eye (and centered in within the pupil 516) but oriented toward the object at which the eye is currently directed”, [0228], [0229-0231], [0239], etc.,); and estimate using the estimation circuit a second relative position that is a position of the target (HMD as a whole and/or display portions/targets thereof to which one or more cameras 324/462 imaging the eyes 410/500/610 are fixed with a known/predetermined position relationship between certain target embodiments – see Applicant’s Specification at 0071-0072 wherein the ‘target’ is a display) with respect to the imager, on the basis of a position of the target reflected to the reflector in the image for estimation and the first relative position ([0009] “determine an orientation of the HMD relative to the interocular axis of the user; and provide the user with feedback based on the determined orientation of the HMD relative to the interocular axis of the user”, [0223] “In general, registration observer 620 may be able to determine if head-mounted unit 602, in general, and displays 220, in particular, are properly positioned in front of the user's eyes. In other words, the registration observer 620 may determine if a left-eye display in display system 220 is appropriately aligned with the user's left eye and a right-eye display in display system 220 is appropriately aligned with the user's right eye. The registration observer 620 may determine if the head-mounted unit 602 is properly positioned by determining if the head-mounted unit 602 is positioned and oriented within a desired range of positions and/or orientations relative to the user's eyes”, [0288] “As an example, the registration observer 620 may use an inward-facing imaging system 462, which may include an eye tracking system, to determine how relevant parts of the wearable system 200 are spatially oriented with respect to the user and, in particular, the user's eyes, ears, mouth, or other parts that interface with the wearable system 200”, etc.,). As to claim 8, Vlaskamp discloses the system of claim 1. 
Vlaskamp further discloses the system wherein the reflector is an eyeball (eyes 410/500/610 as imaged by cameras 324/462, and reflecting glint(s) in addition to reflecting content/patterns rendered on corresponding left and right eye displays, [0009] “an imaging system configured to capture images of eyes of the user”, etc.,). As to claim 9, this claim is the method claim corresponding to the system of claim 1 and is rejected accordingly. See the remarks above as claims 9 and 10 have not actually been amended to match the language of claim 1, despite remarks suggesting otherwise. As to claim 10, this claim is the non-transitory CRM claim corresponding to the system of claim 1 (see remarks) and is rejected accordingly. As to claim 11, Vlaskamp discloses the system of claim 1. Vlaskamp further discloses the system wherein the at least one processor is configured to execute the instructions to correct[0227] “Image preprocessing module 710 may receive images from an eye camera such as eye camera 324 and may perform one or more preprocessing (i.e., conditioning) operations on the received images. As examples, image preprocessing module 710 may apply a Gaussian blur to the images, may down sample the images to a lower resolution, may applying an unsharp mask, may apply an edge sharpening algorithm, or may apply other suitable filters that assist with the later detection, localization, and labelling of glints, a pupil, or other features in the images from eye camera 324. The image preprocessing module 710 may apply a low-pass filter or a morphological filter such as an open filter, which may remove high-frequency noise such as from the pupillary boundary 516a (see FIG. 5), thereby removing noise that may hinder pupil and glint determination. The image preprocessing module 710 may output preprocessed images to the pupil identification module 712 and to the glint detection and labeling module 714”). As to claim 12, Vlaskamp discloses the system of claim 1. Vlaskamp further discloses the system wherein the first relative position is estimated based on a size of the reflector in the image for estimation (Fig. 7A 614 utilizing ‘available data’ comprising 702, 704 and 706, 704 including assumed eye dimensions, [0226] “As examples, eye tracking module 614 may utilize available data including eye tracking extrinsics and intrinsics, such as the geometric arrangements of the eye-tracking camera 324 relative to the light sources 326 and the headmounted-unit 602; assumed eye dimensions 704 such as a typical distance of approximately 4.7 mm between a user's center of cornea curvature and the average center of rotation of the user's eye or typical distances between a user's center of rotation and center of perspective; and per-user calibration data 706 such as a particular user's interpupillary distance. Additional examples of extrinsics, intrinsics, and other information that may be employed by the eye tracking module 614 are described in U.S. patent application Ser. No. 15/497,726, filed Apr. 26, 2017 (Attorney Docket No. MLEAP.023A 7), which is incorporated by reference herein in its entirety”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 1. Claims 4 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Vlaskamp (US 2022/0391013 A1) in view of Miettinen al. (US 2019/0138094 A1) and Greenwald (US 9,898,082 B1). As to claim 4, Vlaskamp discloses the system of claim 1. Vlaskamp further discloses the system [0181], [0297], [0299], [0302], [0303] “As shown in FIG. 16, the HMD may provide different respective alignment markers to the user's left and right eyes in order to demonstrate any left-right vertical misalignment. For example, the HMD may display screen 1600a to a user's left eye and may display screen 1600b to the user's right eye. Screen 1600a may include a left-eye horizontal alignment marker 1502 and a vertical alignment marker 1506, while screen 1600b may include a right-eye horizontal alignment marker 1504 and a vertical alignment marker 1506”). Vlaskamp further discloses a condition in which drawing patterns are displayed on 220 to facilitate a correction/calibration of misaligned HMD/display(s) ([0297] “It will be appreciated that the horizontal portions of the alignment markers are may take the form of other mirror image shapes or arrangements of lines which do not completely overlap when vertically misaligned”, [0299] “the alignment markers may take any suitable shape or form. Using the alignment markers, a user may be able to quickly perceive if the HMD is tilted on their head, including in which direction and by how much. Thus, the user can correct the tilt of the HMD until the HMD is properly leveled”, [0302] “the alignment markers make take the form of the letter "T" laying sideways. It will be appreciated, however, that the alignment markers may take other shapes or forms”, etc.) however these are understood to be patterns that are not necessarily relied upon for determining those eye pose related parameters upon which HMD/display pose determination is based prior to providing the user feedback/alignment markers via display 220. Stated differently, while a disclosed condition, it is not a condition understood to exist at the time of capture for that/those ‘images for estimation’. Miettinen however evidences the obvious nature of gaze tracking (Fig. 5) wherein the image for estimation is an image that is captured in a condition in which a pattern that varies depending on a display position is displayed (see Fig. 3F, and Fig. 4, wherein 3F varies depending on display position in view of those variously oriented “V” and the dotted line therebetween). Greenwald further evidences the obvious nature of gaze tracking wherein the image for estimation is an image that is captured in a condition in which a pattern that varies depending on a display position is displayed on a target/display (Abs “A gaze tracking system uses visible light to track the pose of an eye. A display screen displays a display image, while a camera captures images of the eye . 
The images captured by the camera include a reflection of the display screen, reflecting from the cornea of the eye. The position of the reflection depends on the orientation of the eye”, Fig. 10A, camera 1010 capturing eye 900 and display image of 1000 reflected therein (camera image 1020), Fig. 11, Fig. 14 1406 on the basis of 1402, col 2 lines 1-10, etc.,). It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Vlaskamp such that the image for estimation is an image that is captured in a condition in which a drawing pattern that varies depending on a display position is displayed on the target/display as taught/suggested by Miettinen and/or Greenwald, the motivation as taught/suggested in Miettinen that such a varied pattern may allow for better detection distinguished from visual artifacts, and also/alternatively as suggested in Greenwald that such a varied pattern/displayed image may be used to induce a particular/desired gaze/eye pose (or sequence of poses) that may in response thereto be more efficiently detected/validated. As to claim 13, Vlaskamp in view of Miettinen and Greenwald teaches/suggests the system of claim 4. Vlaskamp in view of Miettinen and Greenwald further teaches/suggests the system wherein the second relative position is estimated based on the first relative position (see Vlaskamp disclosure as identified above for the case of claim 1) and a size of the drawing pattern on the target and a size of the drawing pattern in the image for estimation (Vlaskamp 702, 704 and 706 in further view of Greenwald Fig. 11, col 10 lines 10-20, 20-30 “When considered from the camera's perspective, as the eye moves, both the position of the Purkinje image of the display screen, and the portion of the reflection that is visible, may change. If the eye is near the same position, many of the same features may be visible, but slightly offset and distorted; whereas if the eye is in a very different position, very few features may match – since they may not be visible or distorted beyond recognition … the position, orientation, and shape of reflection 1012 of the displayed star pattern, as it appears in camera image 1020, depends on the gaze direction of eye 900 when camera image 1020 is captured”, col 11, col 19 mapping P accounting for scale, etc.,). It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to further modify the system and method of Vlaskamp in view of Greenwald such that the first and accordingly second relative position are determined based at least in part on a displayed size of a pattern (e.g. star pattern of display image(s) of Greenwald Fig. 11 left column) and a size of the pattern in the camera image (Fig. 11 right column) as taught/suggested by Greenwald in view of e.g. a projective mapping therebetween accounting for scale, the motivation as similarly taught/suggested therein (Greenwald col 10) that such a size correspondence is but one of a number of different features that may serve to indicate a corresponding gaze direction characterized by a reasonable expectation of success. As to claim 14, Vlaskamp in view of Miettinen and Greenwald teaches/suggests the system of claim 4. 
Vlaskamp in view of Miettinen and Greenwald further teaches/suggests the system wherein the at least one processor is configured to execute the instructions to estimate a position of the drawing pattern in the reflecting unit based on the drawing pattern on the target and the drawing pattern in the image for estimation that is captured in a horizontally inverted condition (Vlaskamp e.g. Fig. 8E wherein patterns/glints of Vlaskamp as modified by Miettinen and Greenwald are horizontally inverted as the image patterns as captured from the eyes are reflections/mirrored, and the configuration thereof (in terms of relative position(s) while mirrored) matches that of the source(s) (be they light sources separate from the display and/or part thereof as is the case for the proposed modification) – see Greenwald Fig. 11, and col 10 identified above for the case of claim 13 “the position, orientation and shape of the Purkinje image changes as the orientation of the eye changes”). 2. Claims 2 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Vlaskamp (US 2022/0391013 A1). As to claim 2, Vlaskamp discloses the system of claim 1. Vlaskamp further discloses the system wherein a marker for detecting the reflector from the image for estimation is reflected in the reflector and attached to a reflecting unit (Figs. 1 and 3, IR sources 326 are attached to frame 230, and wherein IR glints serve as markers for detecting eye/reflector pose information, [0177] “With continued reference to FIG. 3, a pair of scanned-laser shaped-wavefront (e.g., for depth) light projector modules with display mirrors and optics configured to project light 338 into the eyes 302, 304 are shown. The depicted view also shows two miniature infrared cameras 324 paired with infrared light sources 326 (such as light emitting diodes "LED"s), which are configured to be able to track the eyes 302, 304 of the user to support rendering and user input. The cameras 324 may be part of the inward-facing imaging system 462 shown in FIG. 4.”, [0216] “In yet other embodiments, there may be one or more cameras 324 and one or more light sources 326 associated with one or each of a user's eyes 610. As a specific example, there may be two light sources 326a and 326b and one or more cameras 324 associated with each of a user's eyes 610. As another example, there may be three or more light sources such as light sources 326a and 326b and one or more cameras 324 associated with each of a user's eyes 610”, [0230-0231], etc.). Vlaskamp fails to disclose markers and/or light sources projecting marker elements, as being attached to the user’s eyeballs themselves and/or e.g. eyewear worn beneath the HMD (see also interpretation notes at page 14 of the Non-Final Office Action). Vlaskamp does however evidence the obvious nature of using markers/marker equivalent(s) to facilitate object detection and subsequent pose determinations based thereon. Official Notice is also taken to the manner in which markers are routinely affixed to objects so as to facilitate detection, alignment, etc., (even and especially for the case that the objects are transparent/semi-transparent – e.g. tissue/sample slides in the context of e.g. spatial transcriptomics, microscopy, etc.,). It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Vlaskamp such that for the case that the ‘reflector’ comprises any surface other than solely a user’s biological eyes (see e.g. 
claim 7), that a marker/fiducial equivalent is attached thereto, similar and as an alternative to e.g. projecting such a marker element, the motivation being as known to POSITA that such a marker element so attached would facilitate detection/localization for any one of the associated components, while omitting a need for otherwise projecting such a marker, in a manner characterized by a reasonable expectation of success. As to claim 5, Vlaskamp discloses the system of claim 1. Vlaskamp further discloses the system wherein the imager is configured to acquire a plurality of images for estimation (Fig. 11 1110, etc.,); the estimation circuit is used to estimate a second relative position, for each image of the plurality of images, that is a position of the target with respect to the imager (see Vlaskamp as applied for the case of claim 1 above, similarly applicable to instances wherein a plurality of images are obtained (more than one eye, more than one iteration, etc.,)); and the at least one processor is configured to execute the instructions to integrate a plurality of the first relative positions estimated from the plurality of images for estimation to calculate an integrated relative position ([0237] “As an example, the CoR estimation module 724 may estimate the CoR by finding the average point of intersection of optical axes determined for various different eye poses over time. As additional examples, module 724 may filter or average estimated CoR positions over time, may calculate a moving average of estimated CoR positions over time, and/or may apply a Kalman filter and known dynamics of the eyes and eye tracking system to estimate the CoR positions over time”, [0238] “Module 726 may employ various techniques, such as those discussed in connection with CoR estimation module 724, to increase the accuracy of the estimated IPD. As examples, IPD estimation module 724 may apply filtering, averaging over time, weighted averaging including assumed IPD distances, Kalman filters, etc. as part of estimating a user's IPD in an accurate manner”). Vlaskamp fails to explicitly disclose that a plurality of second/final/HMD/display poses are integrated, but does disclose such an integration for a plurality of first/eye poses estimated from a plurality of the images for estimation so as to calculate an integrated relative position. The teaching of Vlaskamp for the case of that first relative position may be readily extended to that of the second, for the same purposes, of which are known/readily recognized by PHOSITA. Namely, using a previous HMD/display pose determination as a candidate pose for refinement by implementing e.g. commonly relied upon moving average filter, a Kalman filter, etc.. Such an integrated relative position may serve to eliminate anomalies/outlier pose determinations arising from noisy sensor data, periodic motions (user breathing), etc.. It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Vlaskamp such that e.g. a moving average filtering, Kalman filtering, etc., as implemented for those first/eye poses are similarly implemented for the case of a second/HMD/display pose deduced therefrom, the motivation(s) being that/those identified above, that such a filtering/integrated position determination may serve to minimize the impact of anomalous sensor data. As to claim 6, Vlaskamp teaches/suggests the system of claim 5. 
Vlaskamp further discloses the system wherein the plurality of the images for estimation are images that are captured in a condition in which the first relative position varies (images are captured under a condition in which the eye pose(s) vary(ies), [0237] “In at least some embodiments, the CoR estimation module 724 may refine its estimate of the center of rotation of each of the user's eyes over time. As an example, as time passes, the user will eventually rotate their eyes (to look somewhere else, at something closer, further, or sometime left, right, up, or down) causing a shift in the optical axis of each of their eyes. CoR estimation module 724 may then analyze two (or more) optical axes identified by module 722 and locate the 3D point of intersection of those optical axes. The CoR estimation module 724 may then determine the center of rotation lies at that 3D point of intersection. Such a technique may provide for an estimate of the center of rotation, with an accuracy that improves over time”, etc.,). 3. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Vlaskamp (US 2022/0391013 A1) in view of Miettinen al. (US 2019/0138094 A1). As to claim 3, Vlaskamp teaches/suggests the system of claim 2. Vlaskamp further discloses the system wherein the marker is in a predetermined shape or in a predetermined color (glints have a predetermined shape as detected by module(s) 712/714, both individually and in conjunction for the case(s) of a plurality of light sources 326a-d, and glints have a false but predetermined color (for the case of IR source(s), [0230-0231] “Glint detection module 714 may use this data to detect and/or identify glints (i.e., reflections off of the user's eye of the light from light sources 326) within regions of the preprocessed images that show the user's pupil. As an example, the gli
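
To make the claim language in the excerpt concrete: the independent claims recite a two-step estimation — first the reflector's position relative to the imager, then the out-of-view target's position from where (and how large) it appears reflected in the reflector. Below is a toy sketch of that pipeline under heavily simplified geometry: the reflector is treated as an ideal plane mirror, the distance estimates use the pinhole size ratio touched on in the claim 12/13 discussion, and every number, name, and assumption is ours rather than taken from the application or from Vlaskamp/Greenwald.

import numpy as np

f_px = 800.0            # focal length in pixels (assumed intrinsics)
cx, cy = 320.0, 240.0   # principal point (assumed)

def back_project(u, v):
    # Unit ray through pixel (u, v) in the camera frame.
    ray = np.array([(u - cx) / f_px, (v - cy) / f_px, 1.0])
    return ray / np.linalg.norm(ray)

def reflect_point(p, plane_point, plane_normal):
    # Mirror a 3-D point across a plane (ideal planar stand-in for the reflector).
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

# Step 1 - "first relative position": the reflector with respect to the imager.
# Claim 12 flavor: distance from the reflector's known size vs. its apparent
# size in the image for estimation (pinhole size ratio).
reflector_size_m    = 0.012            # assumed physical size of the reflector
reflector_size_px   = 60.0             # apparent size measured in the image
reflector_center_px = (300.0, 250.0)

d_reflector = f_px * reflector_size_m / reflector_size_px
first_relative_position = d_reflector * back_project(*reflector_center_px)

# Step 2 - "second relative position": the out-of-view target with respect to
# the imager, from where (and how large) it appears reflected in the reflector.
# The reflected pattern is horizontally mirrored (claim 14 flavor); an ideal
# plane mirror preserves angular size, so the pinhole ratio gives the total
# camera -> reflector -> target path length (claim 13 flavor).
reflector_normal = np.array([0.0, 0.0, -1.0])   # assumed to face the camera
pattern_size_m   = 0.05      # physical size of the drawing pattern on the target
pattern_size_px  = 160.0     # apparent size of the pattern inside the reflection
reflection_px    = (305.0, 248.0)               # pixel where the reflection appears

path_len = f_px * pattern_size_m / pattern_size_px
virtual_target = path_len * back_project(*reflection_px)   # virtual image behind the mirror
second_relative_position = reflect_point(virtual_target,
                                         first_relative_position,
                                         reflector_normal)

print("first relative position (reflector):", first_relative_position)
print("second relative position (target):", second_relative_position)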
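
Similarly, the claim 5 discussion in the excerpt — integrating a plurality of per-image first relative positions via moving-average or Kalman-style filtering — is easy to picture. A minimal sketch using an exponential moving average; the data and names are invented.

import numpy as np

def integrate_positions(estimates, alpha=0.3):
    # Exponential moving average over a sequence of per-image 3-D estimates.
    integrated = np.asarray(estimates[0], dtype=float)
    for est in estimates[1:]:
        integrated = (1.0 - alpha) * integrated + alpha * np.asarray(est, dtype=float)
    return integrated

per_image_estimates = [
    [0.031, -0.002, 0.158],   # estimate from image 1 (invented values)
    [0.029,  0.001, 0.161],   # image 2
    [0.034, -0.001, 0.157],   # image 3
]
print("integrated relative position:", integrate_positions(per_image_estimates))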

Prosecution Timeline

Mar 31, 2023
Application Filed
May 13, 2025
Non-Final Rejection — §101, §102, §103
Aug 08, 2025
Applicant Interview (Telephonic)
Aug 08, 2025
Examiner Interview Summary
Aug 18, 2025
Response Filed
Nov 05, 2025
Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602825
Human body positioning method based on multi-perspectives and lighting system
2y 5m to grant — Granted Apr 14, 2026
Patent 12592086
POSE DETERMINING METHOD AND RELATED DEVICE
2y 5m to grant — Granted Mar 31, 2026
Patent 12586397
METHOD AND APPARATUS EMPLOYING FONT SIZE DETERMINATION FOR RESOLUTION-INDEPENDENT RENDERED TEXT FOR ELECTRONIC DOCUMENTS
2y 5m to grant — Granted Mar 24, 2026
Patent 12579840
BEHAVIOR ESTIMATION DEVICE, BEHAVIOR ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant — Granted Mar 17, 2026
Patent 12573086
CONTROL METHOD, RECORDING MEDIUM, METHOD FOR MANUFACTURING PRODUCT, AND SYSTEM
2y 5m to grant — Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 97% (+9.6%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
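
For context on the PTA line, here is a rough sketch of the patent-term-adjustment arithmetic under 35 U.S.C. 154(b): "A-delay" accrues when the first Office action issues more than 14 months after filing, and "B-delay" when pendency runs past 3 years. This ignores applicant-delay offsets and overlap rules, so it only roughs out the kind of arithmetic behind a PTA estimate; the dates are taken from the timeline above.

from datetime import date

filed      = date(2023, 3, 31)
first_oa   = date(2025, 5, 13)          # non-final rejection date from the timeline
a_deadline = date(2024, 5, 31)          # filing date + 14 months

a_delay_days = max(0, (first_oa - a_deadline).days)
print(f"A-delay already accrued: {a_delay_days} days")    # ~347 days

projected_grant_months = 28             # the "2y 4m" median time to grant shown above
b_delay_months = max(0, projected_grant_months - 36)
print(f"Projected B-delay: {b_delay_months} months")      # 0 at the projected pace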
