Prosecution Insights
Last updated: April 19, 2026
Application No. 18/560,105

SUBJECT ANALYSIS DEVICE

Non-Final OA: §101, §102
Filed: Nov 10, 2023
Examiner: CAO, VINCENT M
Art Unit: 3622
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Natsume Research Institute Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 86%

Examiner Intelligence

Grants 55% of resolved cases.

Career Allow Rate: 55% (246 granted / 448 resolved; +2.9% vs TC avg)
Interview Lift: +31.5% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 3y 3m typical timeline; 18 applications currently pending
Career History: 466 total applications across all art units
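The headline figures are related by simple arithmetic on the examiner's resolved-case counts. A minimal sketch of how they fit together (treating the +31.5% lift as additive to the career allow rate is an assumption about how the tool derives its with-interview number):

```python
# Figures as reported on this page.
granted, resolved = 246, 448

# Career allow rate: share of this examiner's resolved cases that granted.
allow_rate = granted / resolved            # ~0.549, displayed as 55%

# Interview lift is reported as +31.5 percentage points; adding it to the
# career rate reproduces the ~86% "with interview" figure.
interview_lift = 0.315
with_interview = allow_rate + interview_lift

print(f"career allow rate: {allow_rate:.1%}")      # 54.9%
print(f"with interview:    {with_interview:.1%}")  # 86.4%
```

Both values round to the 55% and 86% shown above; the small residual reflects rounding in the displayed figures.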

Statute-Specific Performance

§101: 37.1% (-2.9% vs TC avg)
§103: 39.5% (-0.5% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

TC avg = Tech Center average estimate • Based on career data from 448 resolved cases
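Each "vs TC avg" delta is just the examiner's per-statute rate minus the Tech Center average estimate. A quick consistency check on the figures above (note the implied TC average works out to 40.0% for every statute, consistent with a single baseline estimate):

```python
# Examiner's per-statute rate and the reported delta vs the Tech Center
# average, both in percent, as shown in the figures above.
examiner_rate = {"101": 37.1, "103": 39.5, "102": 8.5, "112": 8.7}
delta_vs_tc   = {"101": -2.9, "103": -0.5, "102": -31.5, "112": -31.3}

# Recover the implied Tech Center average: examiner rate minus delta.
tc_avg = {s: examiner_rate[s] - delta_vs_tc[s] for s in examiner_rate}

for statute, avg in tc_avg.items():
    print(f"§{statute}: implied TC average {avg:.1f}%")  # 40.0% for each
```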

Office Action

Rejections: §101, §102
DETAILED ACTION

Status of Claims

This Action is in response to Application 18/560,105 filed 11/10/2023. Claims 1-5 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The following references have not been considered because no translated copy has been provided: JP 5445981; JP 6651536.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Although the preamble of the claims indicates that the claims are directed to a machine, and a machine is one of the statutory classes of invention, the claims do not define a machine because they do not recite any structural features. The claims currently recite a “question output unit”, an “answer display unit”, and a “gaze position analysis unit”, which are not specifically defined in the claims and are defined in paragraph 30 of the originally filed specification as being distinct from hardware components such as a control or subject display unit. Under the broadest reasonable interpretation in view of the originally filed specification, the claimed units refer to software, and software per se is not a statutory class of invention. For the reasons noted, the claims are rejected because they are not directed to statutory subject matter.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ellison (US 20200253527 A1) (hereafter Ellison).

As per claim 1: A subject analysis device (1) comprising: a question output unit (21) that outputs a question to a subject; an answer display unit (5) that displays a candidate answer to the question on a subject's display unit (3); and (See Ellison ¶0266, “For example, with the image set stimulus removed, the user can be tasked to answer the question, “what color was the flower?” and/or, given a selection of multiple choice options to identify, the choice flower most similar to the one in the image or image set.” Ellison discloses a device including an output/display unit for presenting questions and candidate answers.) a gaze position analysis unit (7) that analyzes a gaze position of the subject. (See Ellison ¶0081, “The interactive measures and/or an analysis of switch rates may use both user-identified switches combined with objective measures such as eye tracking analysis to detect a shift in the user's gaze or eye focus from the spatial location of a contiguity in Image #1 to a contiguity in Image #2 and/or Image #3 or EEG, ERP analyses, and/or functional magnetic resonance images (fMRI) to objectively track switching events. Optionally, throughout this specification, any time an eye is tracked, the eye may be tracked automatically via a camera in system 100, and analyzed by the processor system of system 100 or 200. Switch events such as eye tracking can also be monitored using EEG tools in part, because of the integration of real-world images into these dynamic image sets and the recognition/discovery process which can occur when the ground image becomes confluent coincident with and/or part of a switch event. The potential for identifying evoked/event response potentials together with eye tracking data, as well more sophisticated biometrics, imaging and analytical tools, can be used to improve these measurements and assess their potential value as part of building a diagnostic profile of cognitive function and status.” Ellison discloses gathering and analyzing the gaze position.)

As per claim 2: The subject analysis device (1) according to claim 1, wherein the gaze position analyzed by the gaze position analysis unit (7) is added to an image information in the question output unit (21) and the answer display unit (5) to generate a gaze position-added image information, and an image based on the gaze position-added image information is displayed on an analyzer's display unit (2). (See Ellison ¶0087, “[T]he interactivities using enriched real-world images include assessments embedded in interactivities, and which may include assessments of speed and accuracy measurements. The interactivities using enriched real-world images may be combined with simple questions (e.g., about the image). The use of enriched real-world images may improve the quantity and quality of captured data (as compared to other images and/or questions used), allowing for direct measurements of overall cognitive status as well as domain-specific task/skill metrics, towards developing sensitive, reliable cognitive tools. There is no requirement to use real world images—any image may be used, but the integrated use of enriched, real-world color image content increases the effectiveness of the platform's cognitive assessment capabilities as compared to simplified black and white illustrations and/or drawings of individual images for descriptions and user interactions with the content and interactivities, and as general input stimuli for engaging cognition. The use of enriched, real-world color image content helps keep the user interested and also engages (or at least tends to engage) more cognitive abilities in a given task based on the complexity of the information that the user is processing. Assessments data may be derived from speed and/or accuracy measurements made using the App, and/or from questions, including SQ2 (Spatial, Quantitative and Qualitative) type questions, such as “what color was the flower?,” “what do you see?,” “where was the bird?,” “which bird's looks most like the one you saw?,” and/or what do recall seeing, for example. In an embodiment, the question asked regarding an image may be open ended or closed. In an embodiment, the question is one that only requires a one word or one phrase response. In an embodiment, the question is seven or less words. In an embodiment, the question is 10 or less words. In an embodiment, the question is 15 or less words. In an embodiment, the question is 20 or less words. In an embodiment, the question requires that the user analyze interleaved image sets, focusing a range of cognitive abilities in the process, including language and memory domains and subdomains, but may also make use of attention, visual spatial, and/or executive function processes and skills. In situations where some users may not have firsthand experience with the content of an image, for example, a field of sunflowers, but the user has experienced flowers, the image set can still be of value in training, treatment, and assessment. Similarly, while lakes are familiar to a significant number of people, even those who have never experienced a lake can recognize a lake and the lake's relationship to water and/or a body of water.” Ellison discloses associating gaze information with image information, including tracking gaze in association with questions and answers.)

As per claim 3: The subject analysis device (1) according to claim 1, wherein a character and an ability, a mental and physical condition, an interest and a preference, and a presented stimulus of the subject are evaluated based on a change in the gaze position analyzed by the gaze position analysis unit (7). (See Ellison ¶0061, “The state of cognitive function may be related to brain health, well-being, reasoning, decision-making, learning styles, skills development in both healthy individuals and in those with changes in brain health associated with disease conditions. There are a diversity of processes, changes, differences, impacts, and/or altered states which may be reflected in a range of diseases and conditions that have overlapping symptoms and therefore similar impacts on one or more cognitive processes. Some examples of conditions with a cognitive component, include: ADHD, ADD, Autism, Multiple Sclerosis, Parkinson's disease, Type II Diabetes, Atrial Fibrillation, Stroke, Aging, Mild Cognitive Impairment, Alzheimer's disease and other dementias, stress, chemotherapy, post-anesthesia cognitive dysfunction, Schizophrenia, among other transient, progressive, acute and/or chronic physiological, psychological neuromuscular and other conditions.” See also Ellison ¶0076, “The hierarchy in which the image with the highest contiguity ranking in terms of dominance will assume the ground position can be driven in part by the contiguity's characteristics and user's/viewer's input and/or bias and/or preferences. The multi-stable capacity is nonetheless conferred on an image based on the individual image's essential contiguity characteristics and are metered by the combination of the image with other images in terms of the expression of the contiguity. The multi-stable relationship is evident in comparing FIGS. 24C and 24E, and/or FIGS. 24D and 24F where the contiguities have been removed from the multi-stable 2-image composite in FIGS. 24C and 24D to generate the stable configuration in FIGS. 24E and 24F, respectively. Both stable and multi-stable constructions can be generated using any of the platform's modalities, device-based, offline tangible components, and/or a hybrid version using a Tangible User Interface (TUI) prop and active surface.” Ellison discloses determining multiple elements of the subject.)

As per claim 4: The subject analysis device (1) according to claim 1, wherein the subject analysis device (1) further comprises a pupil diameter analysis unit (11) that analyzes a pupil diameter of the subject. (See Ellison ¶0357, “Similarly, the platform can be integrated with add-ons for monitoring other biometrics. As a non-limiting example, the platform may be integrated with multi-channel EEG, single channel EEG, eye-tracking, a heart rate monitor, respiratory rate monitor, blood pressure monitor, galvanic skin response monitor, pupil dilation monitor, temperature monitor, and/or other spatial, temporal and/or frequency brain states monitoring as assessment tools.” Ellison discloses tracking pupil dilation.)

As per claim 5: The subject analysis device (1) according to claim 4, wherein the gaze position analyzed by the gaze position analysis unit (7) is displayed on the analyzer's display unit (2), and a color, a size, or a shape of the gaze position displayed on the analyzer's display unit (2) are changed based on the pupil diameter of the subject analyzed by the pupil diameter analysis unit (11). (See Ellison ¶0138, “Healthcare worker interface 230 is the interface via which the healthcare worker (or other practitioner) interacts with the system 200 for collaborating with other healthcare workers, reviewing test results, and/or progress of patients, and/or for assigning assessment and/or therapy to patients. The terms healthcare worker and practitioner are used interchangeably throughout the specification and may be substituted one for another to obtain different embodiments.” Ellison discloses the concept of an output for the analyzer for reviewing collected information regarding the subject.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Parra et al. (WO 2020236331 A2), which discusses assessment of audience attention including gaze and eye tracking; and Karmarkar et al. (US 20150213634 A1), which discusses modifying content based on user state including eye tracking.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT M CAO, whose telephone number is (571) 270-5598. The examiner can normally be reached Monday - Friday, 11-7. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ILANA SPAR, can be reached at (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT M CAO/
Primary Examiner, Art Unit 3622

Prosecution Timeline

Nov 10, 2023
Application Filed
Dec 23, 2025
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602668
METHOD FOR GENERATING RECYCLING RECORD OF SOLAR PANEL AND RECYCLING SYSTEM IMPLEMENTING THE SAME
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12602709
DYNAMICALLY GENERATING AND SERVING CONTENT ACROSS DIFFERENT PLATFORMS
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12561714
SYSTEMS AND METHODS FOR AUTOMATICALLY DETERMINING USER VETERAN ATTRIBUTES AND UPDATING A VETERAN PROFILE
Granted Feb 24, 2026 • 2y 5m to grant

Patent 12524782
SYSTEM FOR PROVIDING IMPRESSIONS BASED ON CONSUMER PREFERENCES FOR FUTURE PROMOTIONS
Granted Jan 13, 2026 • 2y 5m to grant

Patent 12499463
COMMODITY REGISTRATION SYSTEM AND INFORMATION PROCESSING METHOD
Granted Dec 16, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 55%
With Interview: 86% (+31.5%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 448 resolved cases by this examiner. Grant probability derived from career allow rate.
