Prosecution Insights
Last updated: April 19, 2026
Application No. 18/492,963

Screen Position Estimation

Final Rejection: §101, §103, §112
Filed: Oct 24, 2023
Examiner: LEMIEUX, IAN L
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Zspace Inc.
OA Round: 2 (Final)
Grant Probability: 87% (Favorable)
OA Rounds: 3-4
To Grant: 2y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 87% (above average); 496 granted / 569 resolved; +25.2% vs TC avg
Interview Lift: +9.6% (moderate, roughly +10%), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 4m avg prosecution; 34 currently pending
Career History: 603 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 39.6% (-0.4% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 569 resolved cases.

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

The Amendment filed 11/25/2025 in response to the Non-Final Office Action mailed 10/01/2025 has been entered. Claims 1-20 remain pending in U.S. Patent Application No. 18/492,963, and an Office action on the merits follows.

Specification

In view of the foregoing amendments to Applicant's Specification, correcting display 185 to read display 150, the objection(s) previously set forth are withdrawn.

Response to 35 U.S.C. § 112 Rejections

In view of the foregoing amendments, the claim rejections under 35 U.S.C. § 112(b) are resolved/withdrawn only in part (claim 7 remains indefinite). Claim 1 as amended clarifies the relationship between the display and the base comprising chassis 110B, namely that they are coupled via a hinge-type mechanism, and clarifies the manner in which the camera of the portable imaging device, as now recited, is distinct from (but communicatively coupled to) the portable computer system comprising the base, display, and hinge-type mechanism. Examiner notes that Applicant's disclosure does not recite the language "hinge type mechanism" explicitly, but would agree that support exists in Fig. 1B, illustrating an exemplary laptop configuration common in the art. Claim 1 as amended additionally requires that the "portable imaging device" (corresponding to, but not limited to, HMD 140) is now "physically positioned adjacent to the base". This language is understood, however, to at most require that the portable imaging device is in physical proximity to the "portable computer system" enabling capture of the one or more images received. Claim 1 as amended sufficiently resolves those ambiguities previously identified.
Additionally, claim 12 as amended cures the deficiency regarding narrower and broader ranges in the same claim, as described in MPEP 2173.05(c), subsection I, and the claim rejection(s) on those grounds are withdrawn accordingly.

Regarding claim 7, however, Applicant's remarks assert that the Specification at [0112]-[0114] describes fuzzy-logic-based image processing that thereby sets forth a definition for a "fuzzy image". Examiner agrees that fuzzy-logic-based processing is disclosed (in, e.g., [0155] of the PGPUB; not at [0112]-[0114], as those paragraphs in both the PGPUB and the Specification as filed do not appear to feature the term "fuzzy" at all), but disagrees that this disclosure serves to define what constitutes a "fuzzy image". As was previously identified at page 8 of the Non-Final Office Action, while the term "fuzzy" may serve to modify/characterize a matching algorithm as one that implements fuzzy logic in a matching process, this does not define what a fuzzy image itself is. Applicant's remarks appear to attempt to define the term as "a data representation obtained by" such processing; however, even this is not disclosed, would fail to resolve what characteristics the data representation necessarily has (i.e., how multiple class memberships may be reflected/communicated in the image), and raises additional clarity/precision issues because the claim requires the "received" images to be fuzzy images. The term in question appears at [0160], [0165], and [0171] of the PGPUB, which simply disclose that the broader genus "images" "may include and/or be one or more fuzzy images". At best this disclosure defines a "fuzzy image" as an image, but tells little otherwise.
[0155] does not disclose a "fuzzy image" to be one resulting from the disclosed processing: "For example, Fuzzy image-matching algorithms may be considered as a class of algorithms operating on digitized images and calculating a numeric value reflecting the similarity between two given input images. One common approach is building so-called feature vectors for each given image and then performing KNN (K-Nearest-Neighbor) searches over pre-calculated feature vectors". Examiner would agree that a POSITA might understand/assume the term "fuzzy" to require class memberships that are not "crisp" (e.g., concerning multiple/different class memberships simultaneously); however, despite such an understanding, it is still unclear how such memberships are reflected/represented in a "fuzzy image". Applicant may see the flow chart at MPEP 2111.01 concerning plain-meaning interpretation of limitations that do not have an "ordinary and customary" meaning in the art and for which the Specification does not provide a meaning. An internet-based search of the term in question suggests that it does not have any "ordinary and customary" meaning in the art (nor has the Examiner previously encountered the term). As mentioned above, it is additionally noted that claim 7 depends directly on claim 1 and modifies those images received. In order for the definition supplied in Applicant's remarks to apply, the received images must have undergone some form of preprocessing, by means of a fuzzy algorithm, that is distinct from any used in the "comparing" step that occurs subsequent to receipt. Such preprocessing does not appear supported/disclosed, and Applicant's supplied definition does not appear accurate/applicable accordingly. Even if applicable, "a data representation obtained by", e.g., a fuzzy algorithm fails to clearly/precisely convey how multiple/different memberships are necessarily reflected/represented. Claim 7 remains indefinite accordingly.
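The [0155] passage quoted above describes a concrete pipeline: compute a feature vector per image, then run a KNN search over pre-calculated vectors. A minimal sketch of that approach, using a toy histogram featurizer and tiny synthetic grayscale grids; the featurizer, the images, and the angle labels are illustrative assumptions, not anything from the application:

```python
import math

def feature_vector(image, bins=4):
    """Coarse normalized intensity histogram as a simple feature vector."""
    flat = [px for row in image for px in row]
    hist = [0] * bins
    for px in flat:
        hist[min(int(px * bins), bins - 1)] += 1
    return [h / len(flat) for h in hist]

def knn(query_vec, reference_vecs, k=1):
    """Indices of the k pre-calculated reference vectors nearest the query."""
    dists = sorted((math.dist(query_vec, ref), i)
                   for i, ref in enumerate(reference_vecs))
    return [i for _, i in dists[:k]]

# Hypothetical cached reference images, each associated with a known angle.
references = {
    30: [[0.1, 0.2], [0.1, 0.3]],
    90: [[0.5, 0.6], [0.5, 0.7]],
    120: [[0.9, 0.8], [0.9, 0.9]],
}
angles = list(references)
ref_vecs = [feature_vector(img) for img in references.values()]

query = [[0.5, 0.6], [0.6, 0.7]]  # captured image, closest to the 90° entry
nearest = knn(feature_vector(query), ref_vecs, k=1)[0]
print(angles[nearest])  # prints 90
```

The numeric similarity value the passage mentions corresponds here to the Euclidean distance between feature vectors; a "fuzzy" variant would presumably replace the hard nearest-neighbor pick with graded memberships, which is exactly the detail the Office Action finds undefined.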
Response to 35 U.S.C. § 101 Rejections

Applicant's remarks regarding the Subject Matter Eligibility Analysis have been considered but are determined non-persuasive. Applicant's remarks fail to present distinct Prong One and Prong Two analysis, and fail to identify which "additional elements", excluded from being drawn under the exception, serve for integration at Prong Two of Step 2A and/or how they so serve. Remarks at pages 7-8 assert the claimed invention achieves a tangible improvement in the operation of a real-world device, but fail to identify how the purported improvement is realized by limitations that do not themselves fall within the exception (instead, Applicant's remarks identify the improvement as being that very same image-based display angle determination/calculation drawn to the exception; as framed, the improvement to the system at large is that it performs the exception). Applicant argues that the recited limitations (for the claim as a whole) are not mental steps and/or mathematical abstractions per se because, as recited in the claim, they are accomplished by, e.g., generic computer hardware. The 2024 PEG makes clear that this is not how a Prong One analysis is conducted, and that recitation of, e.g., a generic computer/processor does not prevent either of the "comparing..." and/or "determining... a display angle relative to a base" limitations from being drawn to the mental processes and/or mathematical operations Abstract Idea groupings. Remarks at page 8 assert that the improvement is that the display-orientation assessment using a camera-based sensor need not rely on dedicated hinge sensors or inertial modules. No recited claim elements, however, characterize the "portable computer system" as being absent these elements.
Examiner also finds it unlikely that a distinct/separate imaging device and associated processing is more efficient than, and/or actually intended to replace, those often unnoticed and commonly implemented sensor elements relying on the Hall effect to ascertain a display/lid angle. The claims accordingly do not pertain to such a scenario, and instead are linked to a far broader array of situations wherein, e.g., an HMD (or even a user's smartphone; a person taking pictures of their laptop would read on the claims) may be used in conjunction with one or more "portable computer systems". The claims as recited (and fairly/permissibly interpreted under BRI; see MPEP 2173.01 and 2111) very much raise the issue of preemption that the Alice-Mayo framework was intended to address. Ascertaining a display orientation in the context of AR/VR may be commonly performed so as to display virtual content in conjunction with one or more physical displays; see, e.g., Terre et al. (US 2022/0229534 A1) (e.g., snapping/anchoring content to physical locations, displaying complementary information adjacent to/in conjunction with non-virtual displayed content, etc.). Such an improvement would be to the HMD, and not motivated by any desire to eliminate, e.g., magnetic sensors in the housing of the laptop. Regardless, the alleged improvement is subsumed within the exception, and representative claim 1 rests with determining/calculating a display angle relative to a base (the determined angle is not recited as being used in any subsequent process steps by either the portable imaging device or the portable computer system). As noted in the MPEP and the 2024 PEG, the improvement cannot be to the exception, but must be realized by "additional elements" distinct from the exception. Applicant argues the identified improvement falls "squarely within the type recognized as eligible" in the 2024 PEG (identifying no specific Example claim); Examiner disagrees.
Analysis for Example claims in the 2024 PEG that were found eligible explicitly identified "additional elements" that served to realize the various improvements, and in no instance did the exception itself, or limitations not identified as "additional elements", serve as the improvement. Applicant also argues that the claims are analogous to those of Thales Visionix, Inc. v. United States (Fed. Cir. 2017) and that the eligibility analysis should follow suit. Examiner disagrees, as Thales concerned a "particular configuration" of inertial sensors, while the structure recited in representative claim 1 fails to serve for integration at Prong Two of Step 2A because it at best generally links to a field of use (MPEP 2106.05(h)) wherein one of the devices may be, e.g., a laptop/portable system comprising a display and hinge-type mechanism, and the other may be any other "portable imaging device". Lastly, Applicant's remarks at page 8 suggest that an evidentiary burden has not been met by the Examiner in concluding that one or more "additional elements" are "WURC" (well-understood, routine, conventional; MPEP 2106.05(d)). The previously presented analysis for Step 2B (WURC is not a 2A consideration), however, did not rely on any assertion that the recited "additional elements" constitute only that which is WURC. Instead, it is asserted that the "additional elements" that are, e.g., a camera, portable computer system, etc., are generically recited structure that does not provide "significantly more" in any 2B analysis in view of MPEP 2106.05(f) and/or (h), given the overlap between Prong Two of 2A and Step 2B. It is very likely the case that the structure recited, in view of Applicant's supporting figures, is only that which is well-understood, routine, and/or conventional; however, this is not the exclusive reason identified for why that structure fails to serve as "significantly more" at Step 2B. The corresponding rejections of the claims are maintained accordingly.

Response to Arguments/Remarks re. Prior-Art-Based Rejections

Applicant's arguments with respect to the claim(s) as amended have been considered and determined persuasive in part, but are rendered moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument(s). Examiner agrees that Ma et al. (US 2023/0098910 A1) fails to disclose any embodiment wherein the smartphone device 40/50 is alternatively a laptop, and that for Ma as previously applied the base equivalent is not commonly housed with the display of 40/50 and/or coupled to said display via a hinge-type mechanism as is now required by the claims as amended. Whether or not it would have been obvious to modify Ma in that respect (replacing smartphone device 40/50 with, e.g., a laptop) has not been considered, since an updated search and consideration identifies Srinivasan et al. (US 2020/0241826 A1), which discloses the amended claim elements, including a portable imaging device 240 physically positioned adjacent to the base (see Figs. 8 and 12; physically adjacent so as to enable the capture of image(s) 244 featuring one or more displays of 100 and the base/common housing/chassis comprising hinge 204), communicatively coupled to portable computer system 100 (via 246 and network interface 180, and the same of 240, [0026]), and capturing image(s) 244 for comparison with reference image(s) 252 (e.g., [0027]) so as to determine one or more relative angles between the display(s) and a base equivalent (e.g., Figs. 6, 15 and 17).

[Annotated grayscale figure (media_image1.png) reproduced in the original action.]

While potentially rendered moot in view of Srinivasan, for clarification: Applicant's remarks at page 8 assert the claim requires elements that are not actually/explicitly recited. Although the claims are interpreted in light of the specification, including disclosed even if non-limiting embodiments, limitations from the specification are not read into the claims.
See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). At page 8 Applicant describes the method as "requir[ing] images of the actual display and base". This is not accurate, as the claim requires the images to simply feature "a portion" of the "portable computer system". Claim 18 also requires that the images comprise "areas of interest" that may further (claim 19) comprise "distinctive markings or patterns" (a plain-meaning interpretation includes any distinctive marking for any portion of said system). Claim 14 requires that the one or more images include an oblique view of "at least a portion" of the system. Only for claim 15 does the image illustrate/represent/comprise a portion of the display (and not explicitly the base, which is part of the motivation for previously applying Ma, wherein the base was drawn to that belonging to the HMD); there is a bulk of potential prior art involving imaging displays belonging to a broad array of "portable computer system" embodiments (with disclosures written like Srinivasan [0016]). As is illustrated above for Srinivasan Figs. 8 and 12, images 244 acquired by vision system 240 include oblique views of 133 and 134 (a plane in which 133/134 predominately exists is not parallel to a plane in which the image exists, as captured from the viewpoint(s) of 240). Bleyer as previously applied remains applicable, and Applicant's remarks fail to persuasively establish why/how, e.g., Figs. 16A and 16C (as distinguished from 16B) of Bleyer are not oblique-view equivalents. Regarding remarks directed to the combination of references previously proposed in the rejection of claim 11, these are rendered moot in view of Srinivasan et al. (US 2020/0241826 A1) as applied, particularly in view of the angular offset Φ that is disclosed in the context of a calibration.
Furthermore, compiling a set of images "via a calibration algorithm", broadly, places no constraints whatsoever on what is being calibrated, what the steps of such an algorithm are, or what the end effect of an associated calibration is. Nor does, e.g., claim 10 limit claim 1 to the context of any calibration. Examiner maintains that the teaching of Bleyer relying on multiple acquisition perspectives/views at one or more fixed angular intervals (analogous to Fig. 12 of Srinivasan) remains applicable in view of the previously stated motivation of reducing the impact of any single measurement error. As was identified above with respect to what is imaged, Examiner cautions against reading in limitations from the Specification (e.g., remarks at page 15 asserting there is some requirement for "live camera images", a "nearest neighbor comparison", etc.). Applicant's remarks directed to the limitations of claim 20 are non-persuasive because De Lange as applied requires no teaching of the "nearest-neighbor comparison of cached images" which Applicant asserts is absent/missing (see additionally NPL citation U in the attached PTO-892). Not only is this feature not recited, but De Lange need not teach/suggest limitations for which another reference is relied upon. In response to Applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 2 recites the limitation "receiving the one or more images via communications with a portable device comprising the at least one camera". The underlined portion of the referenced limitation is unclear as it suggests the received images are ones received from a device that is different from the "portable imaging device physically positioned adjacent to the base" established in line 4 of claim 1.

Claim 7 recites the limitation "fuzzy images". It is not clear what constitutes a "fuzzy image". Stated differently, Examiner asserts that a POSITA would not readily understand what constitutes a fuzzy image, even in view of Applicant's Specification, and particularly in view of those considerations identified in the Response to Remarks above. It is not clear how the fuzzy image conveys/contains some measure/indicator of uncertainty, imprecision, and/or multiple class memberships. The evidence of record does not support any finding that the term "fuzzy image" is one known/recognized in the art, and even a plain-meaning reading (wherein, e.g., "fuzzy" connotes a classification/membership on a continuum/that is non-binary) fails to sufficiently resolve how such a measure/indicator is necessarily conveyed. The language in question is indefinite at least because the specification does not clearly set forth a definition for the term(s), and the record as a whole does not lend clarity to what does and does not constitute a fuzzy image.
Dependent claims 3-6 inherit and fail to cure the deficiencies identified above for claim 2 and are rejected accordingly. Dependent claim 8 inherits and fails to cure the deficiencies identified above for claim 7 and is rejected accordingly.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in particular an Abstract Idea ("determining, based on the comparison, the angle of the display relative to the base of the portable computer system") falling under at least the (a) mathematical concepts grouping (mathematical relationships, formulas or equations, and/or calculations/operations) and/or the (c) mental processes grouping (concepts performable in the human mind, including an observation, evaluation, judgement, or opinion), not "integrated into a practical application" at Prong Two of Step 2A, and without "significantly more" at Step 2B.

Step 1: The claim(s) in question are directed to a computer-implemented method for calculating/determining a relative display/screen angle. (Step 1: Yes.)

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim "recites" a judicial exception when the judicial exception is "set forth" or "described" in the claim.
Representative/independent claim(s) 1/9/16 recite "determining, based on the comparison, the angle of the display relative to the base of the portable computer system", falling under the mathematical concepts grouping (comprising mathematical operations/calculations, and not necessarily requiring any explicitly recited equations per se) and/or the mental processes grouping (concepts performable in the human mind, including an observation, evaluation, judgement, or opinion). Furthermore, the "comparing", recited at a high level of generality and placing few constraints on how such a comparison occurs, includes an interpretation that may be performed by a mental analysis/judgement of associated imagery, if not one or more operations for calculating a similarity (i.e., a matching algorithm).

Reference may be made to the July 17, 2024 PEG, Example 47 claim 2 (e) analyzing step, and, e.g., Example 48. The 2024 PEG identifies various process steps as being drawn to the mathematical concepts Abstract Idea grouping, e.g.: Example 47 claim 2 step(s) (b) (at page 7, describing the recited "discretizing" as encompassing a mathematical concept, e.g., rounding data values (that may also be performed mentally)) and (c) (interpreted so as to include mathematical calculations such as performing backpropagation and gradient descent algorithm(s)); Example 48 claim(s) 1 and 2 steps (b) (a "converting" involving a mathematical operation using an STFT), (c) (determining ("using" a DNN) an "embedding" on the basis of an explicitly recited formula), and (e) ("applying binary masks"); and Example 48 claim 3 step(s) (c) (clustering using a k-means clustering algorithm) and (d) (binary masking clusters) (see page 23 of the PEG, available at https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf).
MPEP 2106.04(a)(2)(C): "A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word 'calculating' in order to be considered a mathematical calculation. For example, a step of 'determining' a variable or number using mathematical methods or 'performing' a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation." (Step 2A, Prong One: Yes.)

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (1) identifying whether there are any "additional elements" recited in the claim beyond the judicial exception, and (2) evaluating those "additional elements" individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). Examiner notes for consideration at Prong Two of 2A that MPEP 2106.05(a), (b), (c), and (e) generally concern elements that may be indicative of integration, whereas 2106.05(f), (g), and (h) generally concern elements that are not likely indicative of integration. As an additional note, "additional elements" are generally limitations excluded from interpretation under the Abstract Idea groupings, and may comprise portions of limitations otherwise identified as falling under those Abstract Idea groupings of the 2019 PEG (e.g.,
any "determination" that may be made mentally accompanied by the use of a neural network and/or generic computer hardware, considered under the "apply it" considerations of 2106.05(f)). Any "providing"/outputting, broadly, and "collection" of data (i.e., image acquisition(s)), be they images for training any learning model and/or data/images visually observable/evaluated by a user/operator, also fail(s) to integrate, at least in view of MPEP 2106.05(g) (extra-solution data gathering/output) and/or 2106.05(h), as "generally linking" the exception to a field of use involving machine learning and/or imagery so acquired. Examiner also pre-emptively notes with respect to 2106.05(a) that "functioning of a computer" (see the fact pattern of Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1336, 118 USPQ2d 1684, 1689 (Fed. Cir. 2016)) does not encompass any operations that a general-purpose computer may be programmed/configured to perform, since functioning of a computer instead concerns functions integral to the way computers operate.

Regarding the claim(s) "as a whole": the requirement for considering the claim as a whole stems from the fact that the judicial exception alone cannot provide the improvement, and any "additional elements" are not evaluated in a vacuum separate from the weight of those limitations directed to the exception. Consideration must be given to the degree/extent to which any apparent/disclosed improvement, as realized in recited claim language, is to the exception itself or otherwise distinct from it and captured by those limitations clearly serving as "additional elements" after analysis at Prong One, in addition to how the "additional elements" weigh in comparison to those limitations directed to the exception. Reference may be made to the recent (08/04/2025) memo affirming the analysis set forth in the 2024 PEG (https://www.uspto.gov/sites/default/files/documents/memo-101-20250804.pdf), consistent with guidance to date.
While calculating/determining a relative display angle may have a multitude of uses (furthering an assertion that such a calculation is a "basic tool of scientific and technological work", Alice Corp., 573 U.S. at 216, 110 USPQ2d at 1980; Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012)), it is not in itself a "practical application" distinct from the exception. Additional elements comprising, e.g., generic computer hardware for implementing the method, common hardware thereof (e.g., claims 2-6), etc., are insufficient for integration in view of MPEP 2106.05(f) and/or (h), for generally linking the exception to an environment wherein one or more sensors/communication components may be utilized (MPEP 2106.05(h)) in obtaining imagery for what amounts to necessary data gathering (MPEP 2106.05(g)) for otherwise commonly/routinely performed image-based pose calculation operations (e.g., utilizing known principles of trigonometry and epipolar geometry). No additional elements outside of those directed to the exception itself appear to explicitly/specifically capture/recite any disclosed improvement in technology (MPEP 2106.05(a)), particularly because the claim rests with, and is weighed heavily by, the calculation/operation/exception; and even if done novelly, such a calculation is not statutory subject matter, as the improvement cannot be to the exception. With reference to MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements." See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981). Even when viewed in combination, the "additional elements" present do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: No), and the claims are directed to the judicial exception. (Revised Step 2A: Yes → Step 2B.)
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to "significantly more" than the recited exception, i.e., whether any "additional element", or combination of additional elements, adds an inventive concept to the claim. The considerations of Step 2A Prong Two and Step 2B overlap, but differ in that 2B also requires considering whether the claims feature any "specific limitation(s) other than what is well-understood, routine, conventional activity in the field" (WURC) (MPEP 2106.05(d)). Such a limitation, if specifically recited, must however still be excluded from interpretation under any of the Abstract Idea groupings. Step 2B further requires a re-evaluation of any additional elements drawn to extra-solution activity in Step 2A (e.g., gathering video/image(s)); however, no limitations appear directed to any novel collection per se. Limitations not indicative of an inventive concept/"significantly more" include those that are not specifically recited (instead recited at a high level of generality and, e.g., outcome/result oriented), those that are established as WURC, and/or those that are not "additional elements" by nature of their analysis at Prong One (i.e., those reciting the exception/calculation(s)). Concerning limitations excluded from serving in "significantly more" determinations on the basis of their being broadly recited and outcome/result oriented, absent recited limitations describing how such a result is achieved, Applicant may consider Longitude Licensing Ltd. v. Google LLC, No. 24-1202 (Fed. Cir. April 30, 2025) (available at https://www.cafc.uscourts.gov/opinions-orders/24-1202.OPINION.4-30-2025_2506816.pdf) (see, e.g., pages 7-9).
While it is the MPEP that governs examination, and not necessarily case law, this opinion and those referenced therein illustrate the manner in which claims that do not explicitly recite/capture how a purported inventive concept/improvement is actually achieved, by limitations excluded from a reading under the exception, are not likely to be determined eligible/enforceable. Reference may also be made to the 2024 PEG, describing that an improvement/inventive concept (for "significantly more" determination(s)) cannot be to the judicial exception itself. (Step 2B: No.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1-6, 9-10, 14-16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US 2020/0241826 A1).
As to claim 1, Srinivasan discloses a method for determining an angle of a display relative to a base of a portable computer system, comprising: receiving one or more images (image(s) 244, [0026], etc.,) of a portion of the portable computer system captured via at least one camera of a portable imaging device physically positioned adjacent to the base (Figures 8 and 12, portable imaging device 240), the portable imaging device being configured to communicatively couple to the portable computer system through a wired or wireless interface (240 configured for being communicatively coupled with 100 in view of network interface 180 of 100 and network 246 (e.g. in view of compensation 258 sent to 100) – and that ‘network interface’ of 240/242 (not illustrated – see [0026] “The camera 242 has an internal hardware processor, solid-state memory device, and network interface (not shown for simplicity) that sends or communicates the digital image 244 via a communications network 246 to another information handling system 100b”, wherein 100 is a portable computer system under even a narrow reading [0016] “FIG. 1 illustrates a generalized embodiment of information handling system 100, such as a dual visual surface system. For purpose of this disclosure information handling system 100 can include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, entertainment, or other purposes. 
For example, information handling system 100 can be a personal computer, a laptop computer, a smart phone, a tablet device or other consumer electronic device”); comparing the one or more images to a set of cached images of the computer system ([0027] “The compensation application 250 includes instructions or code that cause the processors 102/104 to perform operations, such as comparing the digital image 244 to a reference image 252”, [0030], [0042], [0043-0044], etc.,); and determining, based on the comparison, the angle of the display relative to the base of the portable computer system (Figs. 6, 15 and 17, determining e.g. rotational misalignment 222 and/or angular offset Φ 302, [0024] “FIGS. 5-6 illustrate additional misalignments 210. FIG. 5 illustrates a horizontal offset 220 in an x-direction (with respect to the origin 216 of the x-y Cartesian coordinate system 218). The horizontal offset 220 may result in a cropage of a portion of either or both of the display 212a and 212b. FIG. 6 illustrates a rotational misalignment 222 between the display images 212a and 212b. The rotational misalignment 222 is illustrated as an angle θ (illustrated as reference numeral 224) about the origin 216 of the x-y Cartesian coordinate system 218. The rotational misalignment 222 may thus be a twist or skew of either or both of the display images 212a and 212b about the origin 216 of the x-y Cartesian coordinate system 218”, [0038] “Because the leaves of the hinge 204 interconnect the housings 206a and 206b, the hinge 204 itself may contribute to the misalignment 210 between the two displays 133 and 134. 
The hinge 204, in other words, has a longitudinal axis LH-LH (illustrated as reference numeral 300) that may have an angular offset Φ (illustrated as reference numeral 302) with respect to the origin 216 of the x-y Cartesian coordinate system 218)”, etc.,), the base comprising a chassis housing one or more components of the portable computer system and coupled to the display via a hinge type mechanism (Fig. 1 wherein the one or more components may further comprise a second/additional display, chassis of 100 in view of hinge 204).

For any assertion that the relative display angles calculated in Srinivasan are relative to the coordinate system 218 only, as opposed to any base component per se (Examiner would disagree with such an assertion because for the case of e.g. Fig. 17, the relative display angle Φ involves longitudinal axis 300 of hinge 204, and for those instances that the hinge is not characterized by a mechanical misalignment the axis of the hinge aligns with the y-axis of 218), it would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Srinivasan, given coordinate system 218, to also ascertain the location of any of the housing/base components of 100 relative thereto (such as axis 300 of hinge 204 relative to the y-axis of 218, see also [0052]) as taught/suggested by Srinivasan, the motivation as similarly taught/suggested therein (e.g. [0040]) that such a relative measure between base components would enable more accurately/comprehensively characterizing a broader measured misalignment 210, and thereby better facilitate subsequent correction/calibration.

As to claim 2, Srinivasan discloses the method of claim 1. Srinivasan further discloses the method wherein the receiving comprises receiving the one or more images via communications with a portable device comprising the at least one camera (images 244 are received by 248 from vision system 240 comprising camera 242 via 246).
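The rejection above turns on comparing captured imagery against stored reference images that each carry a known angle, with the best match supplying the angle estimate. As orientation for the reader, that general technique can be sketched as follows; this is an invented illustration under stated assumptions, not code from the application or from Srinivasan, and every name in it (`estimate_hinge_angle`, `sad_similarity`) is hypothetical:

```python
# Hypothetical sketch only: each cached reference image is keyed by a known
# hinge angle, and the best-scoring comparison supplies the angle estimate.
from typing import Callable, Dict, Sequence

def estimate_hinge_angle(
    captured: Sequence[float],
    cached: Dict[float, Sequence[float]],
    similarity: Callable[[Sequence[float], Sequence[float]], float],
) -> float:
    """Return the angle (degrees) whose cached reference best matches `captured`."""
    if not cached:
        raise ValueError("cached image set is empty")
    # Score the captured image against every cached reference and report the
    # hinge angle associated with the highest-scoring reference.
    return max(cached, key=lambda angle: similarity(captured, cached[angle]))

def sad_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Toy score: negative sum of absolute differences (higher is closer)."""
    return -sum(abs(x - y) for x, y in zip(a, b))

angle = estimate_hinge_angle(
    captured=[0.2, 0.5, 0.9],
    cached={0.0: [0.9, 0.1, 0.0], 45.0: [0.2, 0.6, 0.8], 90.0: [0.0, 0.0, 1.0]},
    similarity=sad_similarity,
)
# angle == 45.0: the 45-degree reference differs least from the captured image
```

The sketch only illustrates the nearest-reference lookup both documents discuss; Srinivasan's compensation application compares against reference images for misalignment calibration rather than necessarily computing a hinge angle this way.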
As to claim 3, Srinivasan discloses the method of claim 2. Srinivasan further discloses the method wherein the communications comprise wireless communications ([0054]).

As to claim 4, Srinivasan discloses the method of claim 3. Srinivasan further discloses the method wherein the wireless communications are according to at least one of a Wi-Fi based communications protocol or a Bluetooth based communications protocol ([0054] “Exemplary embodiments may be easily adapted to stationary or mobile devices having cellular, wireless local area networking capability (such as WI-FI®), near field, and/or BLUETOOTH® capability”).

As to claim 5, Srinivasan discloses the method of claim 2. Srinivasan further discloses the method wherein the wireless communications are supported by an input/output bus and/or peripheral bus between the portable device and the portable computer system ([0017-0018], [0019] “I/O interface 170 includes a peripheral interface 172 that connects the I/O interface to an add-on resource 174 and to network interface 180. Peripheral interface 172 can be the same type of interface as I/O channel 112, or can be a different type of interface. As such, I/O interface 170 extends the capacity of I/O channel 112 when peripheral interface 172 and the I/O channel are of the same type, and the I/O interface translates information from a format suitable to the I/O channel to a format suitable to the peripheral channel 172 when they are of a different type. Add-on resource 174 can include a data storage system, an additional graphics interface, a network interface card (NIC), a sound/video processing card, another add-on resource, or a combination thereof.
Add-on resource 174 can be on a main circuit board, on separate circuit board or add-in card disposed within information handling system 100, a device that is external to the information handling system, or a combination thereof”, [0020] “Network interface 180 represents a NIC disposed within information handling system 100, on a main circuit board of the information handling system, integrated onto another component such as chipset 110, in another suitable location, or a combination thereof. Network interface device 180 includes network channels 182 and 184 that provide interfaces to devices that are external to information handling system 100. In a particular embodiment, network channels 182 and 184 are of a different type than peripheral channel 172 and network interface 180 translates information from a format suitable to the peripheral channel to a format suitable to external devices. An example of network channels 182 and 184 includes InfiniBand channels, Fibre Channel channels, Gigabit Ethernet channels, proprietary channel architectures, or a combination thereof. Network channels 182 and 184 can be connected to external network resources (not illustrated). The network resource can include another information handling system, a data storage system, another network, a grid management system, another suitable resource, or a combination thereof”).

As to claim 6, Srinivasan discloses the method of claim 5. Srinivasan further discloses the method wherein the input/output bus and/or peripheral bus comprises a universal serial bus (USB disclosure of [0017-0018]).

As to claim 9, this claim is the non-transitory CRM claim corresponding to the method of claim 1 and is rejected accordingly (Srinivasan [0059-0061]).

As to claim 10, Srinivasan discloses the CRM of claim 9.
Srinivasan further discloses the CRM wherein the program instructions are further executable by the at least one processor of the portable computer system to cause the portable computer system to compile the set of cached images via a calibration algorithm ([0034] “The information handling system 100 (whether the electronic book 200 or the mobile smartphone 202) may thus be instructed to display known or predetermined images. Calibration may thus be performed while displaying a template of horizontal and vertical patterns, such as during a manufacturing step or a service call. The compensation database 260 may be preloaded with different combinations of values representing the vertical offset 214, the horizontal offset 220, and/or the rotational misalignment 210 and the corresponding compensation 258”, [0040] “The angular offset θ 302 may thus be an additional calibration input to help compensate for the misalignment 210 between the displays 133 and 134”, [0041] “Exemplary embodiments thus present a factory/service calibration and software solution”, [0048-0050], [0049] “The template used for the calibration can be used in a system that includes one or more image capture devices. The system can be a part of an autonomous environment that can be implemented at the point of manufacture of the hinged information handling system. The processor can receive information that is associated with an angle of a hinge of the hinged information handling system”, [0051] “The first and second pixel arrays can thereby be used to select the template that is used during the calibration”, etc.,).

As to claims 14-15, Srinivasan discloses the CRM of claim 9. Srinivasan further discloses the CRM wherein the one or more images include an oblique view of at least a portion of the portable computer system (for claim 14, and further for the case of claim 15, said portion is at least a portion of the display) (Figs. 8 and 12 illustrate the capture of oblique views of 133 and 134 (a plane in which 133/134 predominately exist is not parallel to a plane in which the image exists, as captured from the viewpoint(s) of 240/240a/240b)).

As to claim 16, this claim is the system claim corresponding to the method of claim 1 and is rejected accordingly.

As to claim 18, Srinivasan discloses the system of claim 16. Srinivasan further discloses the system wherein the one or more images include one or more area of interests (AoIs) ([0026] “The digital image 244 has information or data representing one, or both, of the images 212a and 212b respectively displayed by the displays 133 and 134”).

As to claim 19, Srinivasan discloses the system of claim 18. Srinivasan further discloses the system wherein the one or more AoIs include distinctive markings or patterns (e.g. [0025] lines of text 232a and b, [0030] “(such as any target grid, pattern, or shape)”, [0032] “Calibration may thus be performed while displaying a template of horizontal and vertical patterns, such as during a manufacturing step or a service call”, Fig. 11, etc.,).

2. Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US 2020/0241826 A1) in view of Tizhoosh et al. “Fuzzy Image Processing”.

As to claims 7-8, Srinivasan discloses the method of claim 1. Srinivasan fails to explicitly disclose the method wherein the one or more images comprise one or more fuzzy images. See the corresponding 112(b) rejections above, as it is not clear what constitutes a fuzzy image. For the purposes of compact prosecution, Examiner reads a fuzzy image as only being different from an image that is not a fuzzy image, based on it being used in (but not necessarily resultant from – see remarks above) a ‘fuzzy matching algorithm’. Srinivasan fails to explicitly disclose such a fuzzy matching algorithm used in any comparison of those cached/template/reference images 252 to those acquired/received 244.
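For orientation only: a ‘fuzzy’ comparison of the sort at issue differs from a hard match/no-match decision in assigning partial membership across classes. A toy sketch of that idea follows; the scoring function is invented for illustration and is not one of Tizhoosh's fuzzy measures, and the name `fuzzy_memberships` is hypothetical:

```python
# Illustrative sketch only: a "fuzzy" comparison assigns each template class a
# membership degree in [0, 1] rather than a binary match/no-match decision.
def fuzzy_memberships(query: float, templates: dict, spread: float = 1.0) -> dict:
    """Membership of `query` in each template class, normalized to sum to 1."""
    # Invented toy score: closeness falls off with distance from the template.
    raw = {name: 1.0 / (1.0 + abs(query - value) / spread)
           for name, value in templates.items()}
    total = sum(raw.values())
    return {name: score / total for name, score in raw.items()}

m = fuzzy_memberships(query=7.0, templates={"5_deg": 5.0, "10_deg": 10.0})
# The query is partially a member of both classes, more strongly of "5_deg".
```

A query falling between two template classes thus receives graded membership in both, which is the partial-membership behavior the record attributes to fuzzy classification.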
Tizhoosh evidences the obvious nature of utilizing a fuzzy matching algorithm to characterize compared imagery (page 706 (page 26 of the supplied document) Section 22.5.2 comprising Fuzzy correlation section “Fuzzy correlation is used either to quantify the correlation of two features within the same image or, alternatively, the correlation of the same feature in two different images. Examples of features are brightness, edginess, texturedness, etc. More detailed information about the theory on common measures of fuzziness can be found in [13, 14, 21, 35, 36, 37, 38]. A variety of practical applications are given by [19, 20, 29, 39, 40, 41, 42]” and section 22.5.4 Fuzzy/possibilistic clustering; Tizhoosh explicitly discloses the manner in which a fuzzy classification/match determination may be more accurate particularly for instances characterized by a high amount of features falling at or on a separation line between membership classes (e.g. match vs no match), and how such an algorithm better quantifies a partial membership of the same object/query image to more than one matching/classes/images).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Srinivasan such that the template/reference image matching therein further comprises utilizing a fuzzy matching algorithm to characterize compared imagery, as such an algorithm may more accurately assign a query image to its closest/most similar match among template/reference images (and various display misalignments accordingly), particularly for those instances wherein the query/test image is e.g. substantially centered between two associated reference misalignment conditions/values.

3. Claims 11-12, 14-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US 2020/0241826 A1) in view of Bleyer et al. (US 2020/0111232 A1).

As to claim 11, Srinivasan discloses the CRM of claim 10.
Srinivasan further discloses the CRM wherein to perform the calibration algorithm, the program instructions are further executable by the at least one processor of the portable computer system to cause the portable computer system to receive images via the at least one camera (see Srinivasan as identified above for the rejection of claim 10). Srinivasan fails to explicitly disclose 244 as being characterized by any fixed angular interval.

Bleyer further evidences the obvious nature of causing a system to receive images via at least one camera at a fixed angular interval (angular interval between perspectives 1600A, B and C – see Figures 16A-C, wherein the intervals are defined about the camera/HMD’s optical axis; [0080], [0073], [0061], etc.,). It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Srinivasan such that the received imagery concerns fixed angular intervals as taught/suggested by Bleyer (wherein the displayed calibration markers of Fig. 16A-16C are analogously displayed on 133 and 134 of Srinivasan), the motivation as similarly taught/suggested therein that such an acquisition/calibration based on multiple perspectives at an angular interval serves to reduce the impact of any single measurement error.

As to claim 12, Srinivasan in view of Bleyer teaches/suggests the CRM of claim 11. Srinivasan fails to explicitly disclose the CRM wherein the fixed angular interval is at least one of one degree or less, five degrees or less, or ten degrees or less. Srinivasan as modified by Bleyer suggests an acquisition/images that are at e.g. close to perpendicular (1600B) and oblique/non-perpendicular angles relative to the display screen (1600A and 1600C, [0080]), thereby suggesting an angular interval that is at least less than 90 degrees.
Applicant’s Specification does not disclose and/or suggest the manner in which the recited angular interval may be anything other than a design choice constraint readily implemented by POSITA. POSITA would also be aware of the manner in which a smaller angular interval may allow for more accurate/resolved pose determination(s)/relative angle measures, even if at the cost of computational resources. It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to further modify the system and method of Srinivasan in view of Bleyer such that the fixed angular interval is at least one of those recited, as such an angle interval is at least an “Obvious to Try” angular interval chosen from a finite number of identified and predictable angular intervals characterized by a reasonable expectation of success (MPEP 2143 Rationale (E)).

As to claims 14-15, Srinivasan discloses the CRM of claim 9. Srinivasan at the minimum suggests the CRM wherein the one or more images include an oblique view of at least a portion of the portable computer system (for claim 14, and further for the case of claim 15, said portion is at least a portion of the display) (Figs. 8 and 12 illustrate the capture of oblique views of 133 and 134 (a plane in which 133/134 predominately exist is not parallel to a plane in which the image exists, as captured from the viewpoint(s) of 240/240a/240b) - Examiner notes ‘the display’ (claim 15) is of the ‘portable computer system’ (claim 14), and in addressing the limitations of claim 15 those of claim 14 are similarly addressed). Bleyer further evidences the obvious nature of a calibration wherein one or more images include an oblique view of at least a portion of the display ([0073], [0080], Fig. 16A and C).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Srinivasan such that one or more images include an oblique view of at least a portion of the display as taught/suggested by Srinivasan and Bleyer, the motivation as similarly taught/suggested therein (Bleyer [0079-0080]) that such an oblique view would enable capturing a skew of associated image features (e.g. markers 1510 of Bleyer but optionally features of the display itself, and analogous to the displayed patterns/markings of Srinivasan) relative to an orthogonal view/known attributes of the markers (similarly known object/display dimensions), to determine the capturing device’s positional relationship relative thereto in a manner better constrained in terms of total degrees of freedom. While not relied upon, Yamazaki US 11,682,331 B2 (col 25) is also of particular note, as oblique angles in some instances may suggest the user is not actually intending to view their mobile device through the HMD.

As to claim 17, this claim is the system claim corresponding to the CRM claim(s) 14/15, and is rejected accordingly.

4. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US 2020/0241826 A1) in view of Raghoebardajal et al. (US 2016/0337630 A1).

As to claim 13, Srinivasan discloses the CRM of claim 9. Srinivasan fails to explicitly disclose the CRM wherein each cached image of the set of cached images includes metadata associated with a value of an angle between the display and the base of the portable computer system. Srinivasan falls silent regarding the metadata of images 252 (at least in terms of any stored metadata, as opposed to an interpretation wherein the ‘metadata’ may be any data relevant/associated at some level of abstraction, but not necessarily stored as part of the image files themselves – Examiner understands ‘metadata’ to be data that is actually part of a stored image file).
Examiner also notes the common nature of e.g. EXIF data comprising an orientation flag, which may read on ‘metadata associated with a value of an angle’ (which may not require any angle per se but instead data indicative of/associated with such an angle).

Raghoebardajal however evidences the obvious nature of image metadata associated with a value of an angle for a device at a time of capture ([0088] “In embodiments of the invention, the view matrix data is stored in association with the captured image, for example as so-called metadata which is stored and/or transmitted as part of the overall image data package, for example by a camera apparatus such as that described below with reference to FIG. 27 comprising an image capture device for capturing an image; a position and/or orientation detector for detecting the position and/or orientation of the camera apparatus at the time of capture of the image; and a metadata generator for associating metadata with the image, the metadata indicating the detected position and/or orientation of the camera apparatus at the time of capture of the image”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Srinivasan such that associated information at the time of capturing one or more template/reference images is stored in a metadata section thereof as taught/suggested by Raghoebardajal, the motivation as similarly taught/suggested therein and readily recognized by POSITA that such a storage of metadata information ensures the link between such information and the template image itself remains preserved (even if, for example, the template images were to be relocated/transferred, particularly in view of remote/server processing embodiments), in a non-destructive manner and while following the same/common practice of embedding descriptive material into the image file itself, such as intrinsic and extrinsic camera parameters, GPS location, etc.

5. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Srinivasan et al. (US 2020/0241826 A1) in view of De Lange (US 2024/0023931 A1).

As to claim 20, Srinivasan discloses the system of claim 16. Srinivasan fails to explicitly disclose the system further configured to perform the steps/functional language recited in claim 20. Examiner notes however that POSITA would recognize that for any matching involving a quantification of a degree of similarity between at least two potential matches, it would be possible to calculate/interpolate an associated value based thereon. Stated differently, a query/test image that is quantified as having a 60% match with the 5 degree template and a 40% match with a 10 degree template as the two highest matches, suggests the query image is likely at 7 degrees (even if there is no template image corresponding to 7 degrees exactly).
De Lange (analogous art under at least a reasonable pertinence theory (MPEP 2141.01(a)) for the common problem that is determining an image based match/similarity) evidences the obvious nature of a system configured to: determine that none of the set of cached images are an exact match to the one or more images; select two or more nearest matched images of the set of cached images; and determine the angle of the display relative to the base via an interpolation using the two or more nearest matched images ([0098] “Selecting the reference image 62 may comprise identifying a reference image with an exactly matching associated reference point displacement 24. Alternatively, where an exact match with the measured reference point displacement cannot be found, one of the reference images having an associated reference point displacement which most closely matches may be selected. Alternatively, and as will be discussed in more detail later, in some cases, a new reference image may be generated based on interpolating between two reference images which correspond to reference point displacements either side of the measured reference point displacement 54”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date, to modify the system and method of Srinivasan such that the template based image matching thereof further comprises interpolating a determined angle/match result on the basis of two or more nearest matches upon determining that an exact match is not met, as taught/suggested by De Lange, the motivation as similarly taught/suggested therein that such an interpolation/calculation allows for the use of fewer total template images, thereby reducing the associated storage/memory burden.
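The Examiner's worked example (a 60% match with the 5-degree template and a 40% match with the 10-degree template suggesting roughly 7 degrees) amounts to a match-weighted average, since 0.6 × 5 + 0.4 × 10 = 7. A minimal sketch of that arithmetic, with all names invented:

```python
# Sketch of the match-weighted interpolation described in the Office Action:
# with a 60% match at 5 degrees and a 40% match at 10 degrees, the weighted
# average lands at 7 degrees, even with no 7-degree template in the cache.
def interpolate_angle(matches: dict) -> float:
    """matches maps template angle (degrees) -> match score; returns the
    score-weighted average angle."""
    total = sum(matches.values())
    if total == 0:
        raise ValueError("no non-zero match scores")
    return sum(angle * score for angle, score in matches.items()) / total

assert abs(interpolate_angle({5.0: 0.60, 10.0: 0.40}) - 7.0) < 1e-9
```

The same function generalizes to more than two nearest matches, which is consistent with the claim language reciting "two or more" nearest matched images.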
Additional References

Prior art made of record and not relied upon that is considered pertinent to applicant's disclosure: Additionally cited references (see attached PTO-892) otherwise not relied upon above have been made of record in view of the manner in which they evidence the general state of the art.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IAN L LEMIEUX whose telephone number is (571)270-5796. The examiner can normally be reached Mon - Fri 9:00 - 6:00 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park, can be reached on 571-272-7409.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IAN L LEMIEUX/
Primary Examiner, Art Unit 2669

Prosecution Timeline

Oct 24, 2023
Application Filed
Sep 30, 2025
Non-Final Rejection — §101, §103, §112
Nov 25, 2025
Response Filed
Feb 02, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602825
Human body positioning method based on multi-perspectives and lighting system
2y 5m to grant Granted Apr 14, 2026
Patent 12592086
POSE DETERMINING METHOD AND RELATED DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12586397
METHOD AND APPARATUS EMPLOYING FONT SIZE DETERMINATION FOR RESOLUTION-INDEPENDENT RENDERED TEXT FOR ELECTRONIC DOCUMENTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579840
BEHAVIOR ESTIMATION DEVICE, BEHAVIOR ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12573086
CONTROL METHOD, RECORDING MEDIUM, METHOD FOR MANUFACTURING PRODUCT, AND SYSTEM
2y 5m to grant Granted Mar 10, 2026


Prosecution Projections

3-4
Expected OA Rounds
87%
Grant Probability
97%
With Interview (+9.6%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
