Prosecution Insights
Last updated: April 19, 2026
Application No. 18/254,771

SYSTEMS AND METHODS FOR GENERATING A USER PROFILE

Non-Final OA (§101, §102, §112)
Filed: May 26, 2023
Examiner: JIAN, SHIRLEY XUEYING
Art Unit: 3792
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Sofi Health Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (456 granted / 734 resolved), -7.9% vs TC avg
Interview Lift: strong, +23.9% in resolved cases with interview
Typical timeline: 4y 0m avg prosecution, 33 currently pending
Career history: 767 total applications across all art units

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 734 resolved cases
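The "vs TC avg" deltas above are plain percentage-point differences against a single Tech Center average. A minimal sketch of that arithmetic (illustrative only: the 40.0% TC average is inferred from the displayed deltas, not taken from the tool):

```python
# Illustrative reconstruction of the "vs TC avg" deltas shown in the chart.
# The per-statute rates come from the page; the single Tech Center average
# (40.0%, the "black line") is an assumption inferred from the deltas.
tc_avg = 0.400

examiner_rates = {"101": 0.093, "103": 0.341, "102": 0.246, "112": 0.242}

for statute, rate in examiner_rates.items():
    delta_pts = (rate - tc_avg) * 100  # percentage-point difference
    print(f"§{statute}: {rate:.1%} ({delta_pts:+.1f}% vs TC avg)")
```

Running this reproduces the four chart lines, e.g. `§101: 9.3% (-30.7% vs TC avg)`.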

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The current application has an effective filing date of 12/01/2020 according to the priority date of record.

Claim Status

As per the preliminary claim amendment received on 05/26/2023, claims 1-25 are pending.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5, 16, 18-20 and 23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.

Regarding claim 5, the phrase “and optionally wherein” renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). Claims 16, 18-20 and 23 each recite “optionally” and are each rejected as indefinite for the same rationale.

Claim Interpretation

For all pending claims, the claim term “sentiment metric” is interpreted as a metric of a sentiment or feelings. (Specification, pg. 2, Summary of Invention.) Claims 11 and 23 recite “if” statements, which are contingent limitations under MPEP § 2111.04(II):
CONTINGENT LIMITATIONS

The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met. For example, assume a method claim requires step A if a first condition happens and step B if a second condition happens. If the claimed invention may be practiced without either the first or second condition happening, then neither step A nor step B is required by the broadest reasonable interpretation of the claim. If the claimed invention requires the first condition to occur, then the broadest reasonable interpretation of the claim requires step A. If the claimed invention requires both the first and second conditions to occur, then the broadest reasonable interpretation of the claim requires both steps A and B.

The broadest reasonable interpretation of a system (or apparatus or product) claim having structure that performs a function, which only needs to occur if a condition precedent is met, requires structure for performing the function should the condition occur. The system claim interpretation differs from a method claim interpretation because the claimed structure must be present in the system regardless of whether the condition is met and the function is actually performed.

See Ex parte Schulhauser, Appeal 2013-007847 (PTAB April 28, 2016) for an analysis of contingent claim limitations in the context of both method claims and system claims. In Schulhauser, both method claims and system claims recited the same contingent step. When analyzing the claimed method as a whole, the PTAB determined that giving the claim its broadest reasonable interpretation, "[i]f the condition for performing a contingent step is not satisfied, the performance recited by the step need not be carried out in order for the claimed method to be performed" (quotation omitted). Schulhauser at 10.

When analyzing the claimed system as a whole, the PTAB determined that "[t]he broadest reasonable interpretation of a system claim having structure that performs a function, which only needs to occur if a condition precedent is met, still requires structure for performing the function should the condition occur." Schulhauser at 14. Therefore, "[t]he Examiner did not need to present evidence of the obviousness of the [ ] method steps of claim 1 that are not required to be performed under a broadest reasonable interpretation of the claim (e.g., instances in which the electrocardiac signal data is not within the threshold electrocardiac criteria such that the condition precedent for the determining step and the remaining steps of claim 1 has not been met)"; however, to render the claimed system obvious, the prior art must teach the structure that performs the function of the contingent step along with the other recited claim limitations. Schulhauser at 9, 14. The contingent limitation interpretation can be overcome by replacing “if” with “when.”

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim 25 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. This claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to “[a] computer program comprising computer program code means…” As claimed, the computer program code is understood to encompass one or more transitory signals, which are not patent eligible.
In order to qualify as patent eligible matter, the computer program code must be stored in non-transitory computer readable memory.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-25 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Muhsin US 11,464,410.

[Image: media_image1.png]

Regarding claim 1, Muhsin discloses a system for obtaining a user profile (user interface, exemplary as shown in Figs. 11D-11H) comprising a plurality of user sentiment metrics (notification inputs 1138), the system comprising: a display unit (patient information device 1100, e.g. tablet computer has a display) adapted to display a representation (1138 bubble icons) of the plurality of user sentiment metrics (notification inputs 1138 representing a user’s needs and feelings), wherein each representation (1138) occupies a proportion of a display area of the display unit (see Figs. 11D-11H); a user interface (1100 has a display/output and input, e.g. touchscreen, keyboard, mouse etc., col.25, ll.5-11) in communication with the display unit adapted to receive a user input (as shown in Fig.
11A receiving input); and a processing unit (patient monitoring device 1100, e.g. tablet computer has a processor), in communication with the display unit and the user interface (1100), wherein the processing unit is adapted to: control the display unit (1100) to adjust a characteristic of the representation of a user sentiment metric based on the user input (visual prominence, e.g. size, brightness, etc. selected by a user), and wherein the characteristic of the representation of a user sentiment metric represents the magnitude of the user sentiment metric (col.26, ll.35-50 adjust size of each of 1138 based on frequency of use of each metric as selected by a user or group of common users); and generate a user profile (Figs. 11E-11F is a profile personalized by a user) based on the adjusted characteristic of each representation of the plurality of user sentiment metrics (col.26, ll.59-col.27, ll.5).

Regarding claim 2, Muhsin discloses a system as claimed in claim 1, wherein the characteristic of a representation comprises one or more of: a visual characteristic (visual prominence) comprising one or more of: a proportion of the display area occupied by the representation, wherein the proportion of the display area occupied by the representation of a user sentiment metric represents the magnitude of the user sentiment metric, and wherein generating the user profile is based on the adjusted proportions of the display area occupied by each representation of the plurality of user sentiment metrics (col.26, ll.35-50 size and/or brightness adjusted based on magnitude, e.g. frequency of use); a brightness of the representation, wherein the brightness of the representation of a user sentiment metric represents the magnitude of the user sentiment metric, and wherein generating the user profile is based on the adjusted brightness of each representation of the plurality of user sentiment metrics (col.26, ll.35-50 size and/or brightness adjusted based on magnitude, e.g. frequency of use); a hue of the representation, wherein the hue of the representation of a user sentiment metric represents the magnitude of the user sentiment metric, and wherein generating the user profile is based on the adjusted hue of each representation of the plurality of sentiment metrics (this is an alternative limitation that does not need to be taught based on “one or more of” limitation); and a saturation of the representation, wherein the saturation of the representation of a user sentiment metric represents the magnitude of the user sentiment metric, and wherein generating the user profile is based on the adjusted saturation of each representation of the plurality of user sentiment metrics (this is an alternative limitation that does not need to be taught based on “one or more of” limitation); and/or an audible characteristic comprising one or more of: a volume of an audible signal associated with the representation, wherein the volume of the audible signal associated with the representation of a user sentiment metric represents the magnitude of the user sentiment metric, and wherein generating the user profile is based on the adjusted volumes associated with each representation of the plurality of sentiment metrics; and a tone of an audible signal as associated with the representation, wherein the tone of the audible signal associated with the representation of a user sentiment metric represents the magnitude of the user sentiment metric, and wherein generating the user profile is based on the adjusted tones associated with each representation of the plurality of sentiment metrics.
(The “audible characteristic” limitation is an alternative limitation to “visual characteristic” that does not need to be taught based on “one or more of” limitation)

Regarding claim 3, Muhsin discloses a system as claimed in claim 1, wherein the processing unit is further adapted to: determine a position within the display area of the display unit for each representation of the plurality of user sentiment metrics; and control the display unit to display each representation at the determined position. (see Figs. 11D-11H)

Regarding claim 4, Muhsin discloses a system as claimed in claim 3, wherein, for each instance of displaying the representation of each of the plurality of user sentiment metrics, determining the position within the display area comprises selecting a random position within the display area for each representation. (see Figs.11D-11H)

Regarding claim 5, Muhsin discloses a system as claimed in claim 3, wherein the processing unit is further adapted to: determine a direction of approach from a starting position within the display area to the determined position for each representation of the plurality of user sentiment metrics; and control the display unit to display each representation moving along the direction of approach to the determined position (see Figs.11D-11H starting position of the display is inherent); and optionally wherein, for each instance of displaying the representation of each of the plurality of user sentiment metrics, determining the direction of approach comprises selecting a random starting position within the display area for each representation.
(This limitation is not required to be taught based on the “optionally” limitation.)

Regarding claim 6, Muhsin discloses a system as claimed in claim 3, wherein the user input further comprises an adjustment of a position of a representation of a user sentiment metric, and wherein the generation of the user profile is further based on the adjustment to the position of the representation. (adjustment based on user’s input, i.e. the user’s frequency and personalized associated value 1139 of selecting a particular sentiment metric 1138; see col.26, ll.44-46 and ll.54-58)

Regarding claim 7, Muhsin discloses a system as claimed in claim 1, wherein the user sentiment metrics comprise continuous metrics. (col.6, ll.20-22 continuous stream of physiological information, also see col.9, ll.25-32)

Regarding claim 8, Muhsin discloses a system as claimed in claim 1, wherein the processing unit is further adapted to perform one or more of: add an additional user sentiment metric (1138) to the plurality of user sentiment metrics based on a user input; and remove a user sentiment metric from the plurality of user sentiment metrics based on a user input. (clinician user able to add and/or remove a sentiment metric, see col.26, ll.25-35 clinician assigns a “preset clinician notification inputs 1138”)

Regarding claim 9, Muhsin discloses a system as claimed in claim 1, wherein a user sentiment metric (1138; i.e. pain) of the plurality of user sentiment metrics further comprises a secondary user sentiment metric (selectable pain scale), wherein the secondary user sentiment metric defines a sub-class of the user sentiment metric. (Figs. 11E and 11F, and see col.26, ll.51-col.27, ll.5, “the patient personalized the preset “pain” notification message by selecting a pain level of “4.””)

Regarding claim 10, Muhsin discloses a system as claimed in claim 1, wherein the user profile comprises a record of the plurality of user sentiment metrics over time. (col.23, ll.50-60, also see col.15, ll.4-13)

Regarding claim 11, Muhsin discloses a system as claimed in claim 10, wherein the processing unit is further adapted to automatically adjust the characteristic of a representation of a user sentiment metric if the record of the plurality of user sentiment metrics over time indicates that the user has not provided an input relating to said user sentiment metric for a predetermined period of time. (col.26, ll.35-50 since sentiment metric 1138 is adjusted in size and/or brightness based on frequency of use, this inherently provides that when a certain sentiment metric is not frequently selected or has not been selected in a while, its representation 1138 icon bubble would automatically decrease in size and/or brightness)

Regarding claim 12, Muhsin discloses a system as claimed in claim 1, wherein the processing unit is further adapted to record an order in which the user provides a user input relating to each of the plurality of user sentiment metrics displayed on the display unit, thereby obtaining a user sentiment metric interaction hierarchy, and wherein the user profile further comprises the user sentiment metric interaction hierarchy. (col.26, ll.35-50 input hierarchy is inherent based on recording frequency of use/selecting each sentiment metric 1138 and also see col.26, ll.51-58 user’s input of association value 1139 also affects the hierarchy of each sentiment metric 1138)

Regarding claim 13, Muhsin discloses a system as claimed in claim 1, wherein the user input relating to a user sentiment metric comprises a plurality of adjustments, and wherein the processing unit is further adapted to record the plurality of adjustments, thereby obtaining an adjustment profile for the user sentiment metric, and wherein the user profile further comprises the adjustment profile.
(col.26, ll.51-58 user’s input of association value 1139 affects the preset notification input 1138 profile, such that the association values for each 1138 are used to create a personalized profile such as shown in Fig. 11E)

Regarding claim 14, Muhsin discloses a system as claimed in claim 1, wherein a first user sentiment metric of the plurality of user sentiment metrics comprises a correlation relationship with a second user sentiment metric (e.g. pain, and/or pain medication) of the plurality of user sentiment metrics, and wherein the processing unit is adapted to, when adjusting the characteristic of a representation of a user sentiment metric based on the user input, adjust the characteristic of the first user sentiment metric based on the user input and adjust the characteristic of the second user sentiment metric based on the correlation relationship with the first user sentiment metric. (col.26, ll.32-35 “Some of the notification messages can also correspond to requests for pain medication, attention to a device in the patient's room which is beeping or otherwise indicating the need for attention, etc.” Also see col.26, ll.59-col.27, ll.5 adjusting the representation of the graphic, e.g. pain arc, based on a selected pain value)

Regarding claim 15, Muhsin discloses a system as claimed in claim 14, wherein the processing unit is further adapted to alter the correlation relationship based on a user input. (see rejection to claim 14 above, and see Fig. 11F)

Regarding claim 16, Muhsin discloses a system as claimed in claim 1, wherein the system further comprises a sensor (patient monitoring device 1000 is connected with patient communication device 1100 as shown in Fig. 11A) adapted to obtain sensor data from the user (see col.23, ll.60-col.25, ll.52 discloses various sensor data including physiological sensors and camera; also see col.26, ll.33 pain and pain medication), and wherein generating the user profile is further based on the sensor data (Fig.11E pain value and request for pain medication are associated with physiological sensor data), and optionally wherein the sensor comprises one or more of: a motion sensor; a light sensor; a sound sensor; a heart rate sensor; an SpO2 sensor; a temperature sensor; a blood sugar sensor; a hydration sensor; and a weight sensor. (This limitation is not required to be taught based on the “optionally” limitation. Alternatively, see col.4, ll.9-65 discloses a plurality of physiological sensors for monitoring a user)

Regarding claim 17, Muhsin discloses a system as claimed in claim 16, wherein a first user sentiment metric of the plurality of user sentiment metrics comprises a correlation relationship with a second user sentiment metric of the plurality of user sentiment metrics, and wherein the processing unit is adapted to, when adjusting the proportion of the display area occupied by a representation of a user sentiment metric based on the user input, adjust the proportion of the first user sentiment metric based on the user input and adjust the proportion of the second user sentiment metric based on the correlation relationship with the first user sentiment metric, and wherein the processing unit is further adapted to alter the correlation relationship based on the sensor data. (col.26, ll.51-58 user’s input of association value 1139 affects the preset notification input 1138 profile, such that the association values for each 1138 are used to create a personalized profile such as shown in Fig. 11E.
Alternatively, col.26, ll.32-35 “Some of the notification messages can also correspond to requests for pain medication, attention to a device in the patient's room which is beeping or otherwise indicating the need for attention, etc.” Also see col.26, ll.59-col.27, ll.5 adjusting the representation of the graphic, e.g. pain arc, based on a selected pain value)

Regarding claim 18, Muhsin discloses a system as claimed in claim 1, wherein the processing unit is further adapted to obtain environmental data (contextual data; see Fig. 4, and col.6, ll.17-22, col.9, ll.22-32) relating to the user's environment, and wherein generating the user profile is further based on the environmental data (Fig. 11D: room clean up, device beeping are based on contextual data), and optionally wherein the environmental data comprises one or more of: geographical data; elevation data; weather data; pollen count data; humidity data; temperature data; pressure data; air pollution data; water pollution data; light pollution data; noise pollution data; and UV index data. (This limitation is not required to be taught based on the “optionally” limitation.)

Regarding claim 19, Muhsin discloses a system as claimed in claim 1, wherein the user input comprises one or more of: a hand gesture performed by the user (col.24, ll.5-19 camera is capable of detecting hand gestures as an input; also col.25, ll.5-11 1100 has a touchscreen which is capable of detecting common touchscreen operating hand gestures, e.g. tap, swipe, etc.), and optionally wherein the display unit and the user interface are incorporated into one or more of: a touch screen unit; an augmented reality unit; or a virtual reality unit; or an eye movement performed by the user, wherein the system further comprises a camera adapted to capture image data of an eye of the user, and optionally wherein the display unit and the user interface are incorporated into one or more of: a touch screen unit; an augmented reality unit; or a virtual reality unit. (This limitation is not required to be taught based on the “optionally” limitation.)

Regarding claim 20, Muhsin discloses a system as claimed in claim 1, wherein the processing unit is further adapted to generate a prompt to be provided to the user to encourage the user to provide a user input (see col.26, ll.8-22 and Fig. 11D: question reminder module 1136 allows a user to provide their input), and optionally wherein the processing unit is adapted to generate the prompt at randomized intervals (this limitation is not required to be taught based on the “optionally” limitation).

Regarding claim 21, Muhsin discloses a system as claimed in claim 1, wherein the user profile further comprises user demographic data. (col.16, ll.39-42 patient demography)

Regarding claim 22, Muhsin discloses a distributed system (Fig.6) for obtaining a plurality of user profiles (profiles associated with each of a plurality of patients, shown in Fig. 6 as a plurality of patient monitors) comprising a plurality of user sentiment metrics (each patient monitor 1100 having a plurality of user sentiment metrics as shown in Figs.11D-11H), the distributed system comprising: a plurality of systems (1100, in communication to patient monitors 640, note: see Fig.11A illustrating both communication device 1100 and monitoring device 1000 for each patient user) as claimed in claim 1, each system being associated with an individual user (Fig.
11A), and wherein each of the plurality of systems further comprises a communications unit; a remote processing unit (multi-patient monitoring system/MMS 620) in communication with the communication units of the plurality of systems (see Figs. 6 and 7), wherein the remote processing unit is adapted to: obtain a plurality of user profiles from the plurality of systems; and generate a community profile based on the plurality of user profiles, the community profile comprising a plurality of community sentiment metrics generated based on the plurality of user sentiment metrics of the plurality of user profiles. (This is taught implicitly; see col.26, ll.39-41 “For example, a clinician notification input 1138 that is used most frequently by the patient, or by a selected group of patients, can be displayed with the largest size”. Sentiment metrics 1138 which are used most frequently by a group of patients are displayed at the largest size; this evidences that multiple users’ selections are aggregated to create a “community profile.” Alternatively, also see the limitation in col.30, ll.8-10, claim 1, last clause.)

Regarding claim 23, Muhsin discloses a distributed system as claimed in claim 22, wherein the remote processing system (MMS 620) is further adapted to perform one or more of: link a user profile to the community profile if the plurality of user sentiment metrics of the user profile match the plurality of community sentiment metrics within a predetermined tolerance; (col.8, ll.26-32 associating a particular user’s context information with other context information with one or more other devices, also see col.9, ll.18-32, and col.26, ll.51-59 sentiment metrics associated values provided by each individual user) where the user profile comprises user demographic data and the community profile further comprises community demographic data, link a user profile to the community profile if the user demographic data matches the community demographic data within a predetermined tolerance; (This limitation is an alternative limitation that does not need to be taught based on the “one or more of” limitation.) where the user profile comprises environmental data and the community profile further comprises community environmental data, link a user profile to the community profile if the environmental data matches the community environmental data within a predetermined tolerance (This limitation is an alternative limitation that does not need to be taught based on the “one or more of” limitation); and where the user profile comprises sensor data and the community profile further comprises community sensor data, link a user profile to the community profile if the sensor data matches the community sensor data within a predetermined tolerance (This limitation is an alternative limitation that does not need to be taught based on the “one or more of” limitation); and optionally wherein, the processing unit of each system associated with a user profile linked with a community profile is adapted to cause the display unit to
display a representation of the plurality of community sentiment metrics. (This limitation is not required to be taught based on the “optionally” limitation.)

Regarding claim 24, Muhsin discloses a method for profiling a user based on a plurality of user sentiment metrics, the method comprising: displaying a representation (1138 bubble icons) of each of the plurality of user sentiment metrics (notification inputs 1138 representing a user’s needs and feelings) to the user by way of a display unit (patient information device 1100, e.g. tablet computer has a display), wherein each representation (1138) occupies a proportion of a display area of the display unit (see Figs. 11D-11H); receiving a user input (as shown in Fig. 11A receiving input via 1100 which has a display/output and input, e.g. touchscreen, keyboard, mouse etc., col.25, ll.5-11); adjusting a characteristic of the representation of a user sentiment metric (1138) based on the user input (visual prominence, e.g. size, brightness, etc. selected by a user), and wherein the characteristic of the representation of a user sentiment metric represents the magnitude of the user sentiment metric (col.26, ll.35-50 adjust size of each of 1138 based on frequency of use of each metric as selected by a user or group of common users); and generating a user profile (Figs. 11E-11F is a profile personalized by a user) based on the adjusted characteristic of each representation of the plurality of user sentiment metrics (col.26, ll.59-col.27, ll.5).

Regarding claim 25, Muhsin discloses a computer program comprising computer program code means which is adapted, when said computer program is run on a computer, to implement the method of claim 24. (See col.28, ll.24-28 system hardware and software)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Tarnok US 2013/0096819 Fig.
2 teaches a mobile device in which icons associated with different applications are adjusted to varying sizes based on frequency of use of said applications.

[Image: media_image2.png]

Aimone et al. US 2014/0223462 A1 Fig. 5 illustrates a personalized profile of sentiment metrics in which each sentiment metric is proportioned in size and adjusted/positioned according to user input.

[Image: media_image3.png]

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIRLEY X JIAN whose telephone number is (571)270-7374. The examiner can normally be reached M-F 8:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Klein, can be reached at 571-270-5213. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIRLEY X JIAN/
Primary Examiner, Art Unit 3792
October 18, 2025

Prosecution Timeline

May 26, 2023: Application Filed
Oct 18, 2025: Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599321
DEVICES AND METHODS FOR ASSESSING PULMONARY STATUS USING OPTICAL OXYGENATION SENSING
2y 5m to grant • Granted Apr 14, 2026
Patent 12594423
Occipital Lobe Stimulation Device
2y 5m to grant • Granted Apr 07, 2026
Patent 12597514
WEARABLE SENSOR AND SYSTEM THEREOF
2y 5m to grant • Granted Apr 07, 2026
Patent 12588855
DETERMINATION METHOD AND DETERMINATION APPARATUS FOR BEGINNING OF T-WAVE, STORAGE MEDIUM AND COMPUTER PROGRAM PRODUCT
2y 5m to grant • Granted Mar 31, 2026
Patent 12582314
SENSING DEVICE
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62%
With Interview (+23.9%): 86%
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 734 resolved cases by this examiner. Grant probability derived from career allow rate.
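The headline figures in this panel are reproducible from the examiner statistics above. A minimal sketch, assuming the tool uses the career allow rate as the baseline grant probability and adds the observed interview lift (the vendor's actual model is not disclosed):

```python
# Hypothetical reconstruction (assumed formulas, not the vendor's model):
# baseline grant probability = career allow rate (456 / 734), and the
# interview scenario adds the observed +23.9-point interview lift.
granted, resolved = 456, 734
interview_lift_pts = 23.9  # percentage points, from the examiner panel

allow_rate = granted / resolved * 100             # ~62.1, shown as 62%
with_interview = allow_rate + interview_lift_pts  # ~86.0, shown as 86%

print(f"Grant probability: {allow_rate:.0f}%")     # 62%
print(f"With interview:    {with_interview:.0f}%")  # 86%
```

This matches the displayed 62% and 86% after rounding to whole percentages.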

Free tier: 3 strategy analyses per month