Prosecution Insights
Last updated: April 19, 2026
Application No. 18/426,102

APPARATUS AND METHOD OF DETERMINING A CONDITIONAL PROFILE ADJUSTMENT DATUM

Status: Non-Final Office Action (§103)
Filed: Jan 29, 2024
Examiner: ISMAIL, OMAR S
Art Unit: 2635
Tech Center: 2600 (Communications)
Assignee: KPN Innovations, LLC
OA Round: 1 (Non-Final)

Predictions
Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (above average; 734 granted / 802 resolved; +29.5% vs Tech Center average)
Interview Lift: +9.7% across resolved cases with an interview (moderate, roughly +10%)
Avg Prosecution (typical timeline): 2y 2m
Total Applications: 826 across all art units (24 currently pending)

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§102: 7.0% (-33.0% vs TC avg)
§103: 66.3% (+26.3% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Based on career data from 802 resolved cases; Tech Center averages are estimates.
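The headline figures above are simple ratios over the counts shown. As a sanity check, here is a short Python sketch (the per-case career data is not reproduced here, so the Tech Center baseline is a derived assumption, not a published number) that recomputes the career allow rate and the versus-Tech-Center deltas:

```python
# Recompute the dashboard figures from the counts shown above.
granted = 734
resolved = 802

career_allow_rate = granted / resolved          # 0.9152... -> displayed as 92%
tc_avg = round(career_allow_rate * 100) - 29.5  # implied baseline (assumption)

print(f"Career allow rate: {career_allow_rate:.1%}")   # Career allow rate: 91.5%
print(f"Implied TC average: {tc_avg:.1f}%")            # Implied TC average: 62.5%

# Statute mix from the Statute-Specific Performance table:
statute_share = {"101": 2.1, "102": 7.0, "103": 66.3, "112": 10.7}
vs_tc = {"101": -37.9, "102": -33.0, "103": 26.3, "112": -29.3}

# Each implied TC-average estimate is the share minus the reported delta;
# interestingly, all four work out to the same 40.0% baseline.
tc_estimates = {s: round(statute_share[s] - vs_tc[s], 1) for s in statute_share}
print(tc_estimates)  # -> {'101': 40.0, '102': 40.0, '103': 40.0, '112': 40.0}
```

Note that the displayed 92% rounds up from 91.5%, and all four statute deltas are consistent with a single 40% Tech Center baseline.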

Office Action

Rejection basis: §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED OFFICE ACTION

Status of Claims

Claims 1-20 are pending.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

1. Claims 1, 2, 4-8, 10-12, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 2015/0261996) in view of Liu et al., "Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning," IEEE Access, vol. 7, pp. 22002-22007, March 1, 2019 ("Liu").

As per claim 1, Kim teaches an apparatus for determining a conditional profile adjustment datum (the conditional profile adjustment is interpreted as the health status information and biological condition taught in FIG. 5 and Paragraph [0033]: "… detecting bio-information of the user at a point of time when the second image is captured, wherein the extracting the second health status information may include: determining a biological condition of the user at the point of time when the second image is captured based on the detected bio-information; and excluding some of the second facial condition information, which is shown on the face of the user due to the biological condition. …"), the apparatus comprising:

at least a processor (Paragraph [0740]: "… The video processor 135 may process video data included content received through the communication unit 130 or included in content stored in the memory 120. The video processor 135 may perform various image processes …"); and

a memory connectively connected to the at least a processor (Paragraph [0735]: "… The CPU 173 accesses the memory 120 and performs booting by using an operating system (OS) stored in the memory 120. Also, the CPU 173 performs various operations by using various programs, contents, and data stored in the memory 120."), the memory containing instructions configuring the at least a processor (Paragraph [0740], quoted above) to:

receive a first plurality of photographs related to a human subject (images/photographs taught in FIG. 8 and Paragraph [0009]: "… health status information obtainable by easily collecting face images …"); and

identify a first conditional indicator as a function of the first plurality of photographs and entries contained within an expert database (Paragraphs [0139]-[0141]: "… [t]he device 100 may obtain the health status information 40 indicating a health status of the user by using the facial condition information. One or more exemplary embodiments of the device 100 obtaining the health status information 40 by using the facial condition information will be described in detail below with reference to FIGS. 37 through 44 …").

Kim does not explicitly teach the remaining limitations: generate a first conditional profile by: training a classifier on a training dataset including a plurality of example conditional indicators as inputs correlated to a plurality of example conditional profiles as outputs; and generating the first conditional profile as a function of the first conditional indicator using the trained classifier; determine a conditional profile adjustment datum as a function of the first conditional profile; and communicate the conditional profile adjustment datum to the human subject.

However, within analogous art, Liu teaches generate a first conditional profile (page 22003, col. 1: "… Different physiological parameters are often obtained by different methods, it can reflect physical condition of human … Hematology parameters are now one of the criteria for the diagnosis and treatment of diabetes and its complications in the worldwide … And urine parameters report metabolic status of human … Therefore, hematological parameters and urinary parameters are used to determine healthy condition of human. …") by:

training a classifier on a training dataset including a plurality of example conditional indicators as inputs correlated to a plurality of example conditional profiles as outputs (classification and training of a dataset taught in page 22003, FIGURE 1 and col. 1: "… the model cannot learn features enough and classify samples precisely. The other problem is that the model learning features excessively leads poor generalization ability on validation dataset. One method of addressing the lack of data in a given domain is using extended data to supplement raw dataset known as expanding learning. Expanding learning algorithm has been proved to be an effectively method [17], particularly when it faces with domains with limited data [18]. Rather than training a completely blank traditional neural network by complex coding to find relationship between input and output, …"); and

generating the first conditional profile as a function of the first conditional indicator using the trained classifier (page 22006, col. 1: "… The condition that one person has multiple diseases at the same time was also considered in this paper. In this condition, the labels were made by permutation and combination of diseases, such as the sample of Diabetes Circulatory Complication labeled 1, the sample of patient at the same time suffering from diseases Diabetes Circulatory Complication and Diabetic Peripheral Neuropathy labeled 2 and so on in order to diagnosis comprehensive illness. When judging multiple diseases at the same time, the task of classification layer is to classify multiple situations. …");

determine a conditional profile adjustment datum as a function of the first conditional profile (page 22006, col. 1, quoted above); and

communicate the conditional profile adjustment datum to the human subject (page 22006, col. 2: "… Under this condition, this AI system can conduct preliminary referrals decisions of patients to save medical resources …").

One of ordinary skill in the art would have been motivated to combine the teaching of Liu with the modified teaching of Kim because Liu provides a method and system for implementing clinical diagnostics of human health based on medical image data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement the teaching of Liu within the modified teaching of Kim for that purpose.

As per claim 2, the combination of Kim and Liu teaches claim 1. Kim teaches wherein the memory contains instructions configuring the at least a processor (Paragraph [0740], quoted above) to: communicate a feedback prompt to the human subject; and receive feedback from the human subject (Paragraphs [0039]-[0040]: "… a user interface unit configured to receive a user input for executing a video call application in the device, wherein the controller may be configured to, when the video call application is executed according to the received user input, control the imager to capture the second image, and to extract the second health status information from the captured second image. …").

As per claim 4, the combination of Kim and Liu teaches claim 1. Kim teaches wherein the memory contains instructions configuring the at least a processor to communicate (Paragraph [0740], quoted above) to the human subject a medical professional identifier (Paragraph [0509]: "… the process of obtaining the health status information may be performed by the service server 1000 communicating with the device 100. The service server 1000 may be, for example, a server that provides a service of obtaining health status information, and may be a medical service server, an application server, or a website server.").

As per claim 5, the combination of Kim and Liu teaches claim 1. Kim teaches wherein the memory contains instructions configuring the at least a processor (Paragraph [0740], quoted above) to determine a conditional status of the human subject as a function of the first conditional profile, wherein the conditional profile adjustment datum identifies a therapy associated with the conditional status (Paragraph [0664]: "… the third party server 2000a may be a server providing information about life habits suitable to the user. The third party server 2000b may be a server providing a diet therapy suitable to the user. Also, the third party server 2000c may be a server for recommending exercises suitable to the user.").

As per claim 6, the combination of Kim and Liu teaches claim 5. Kim teaches wherein the memory contains instructions configuring the at least a processor (Paragraph [0740], quoted above) to display to the human subject a visual element as a function of the therapy associated with the conditional status (Paragraph [0089]: "FIG. 40 is a diagram for describing a method of displaying photographing circumstance information obtained while capturing an image, together with health status information of a user, …" and Paragraph [0664], quoted above).

As per claim 7, the combination of Kim and Liu teaches claim 5. Kim teaches wherein the memory contains instructions configuring the at least a processor to communicate to the human subject identity information of an entity providing the therapy (Paragraph [0089] and Paragraph [0664], quoted above).

As per claim 8, the combination of Kim and Liu teaches claim 1. Kim teaches wherein the first plurality of photographs is received from a computing device associated with a social networking platform (Paragraph [0108]: "FIG. 54 is a diagram for describing a method of displaying, by a device, health status information of a user when a social network application is executed, …" and Paragraph [0303]: "… receiving a user input of selecting a photograph, the device 100 may download the selected photograph from an external server. The external server may be a social network service (SNS) server, …").

As per claim 10, the combination of Kim and Liu teaches claim 1. Kim teaches wherein the first conditional indicator is identified as a function of metadata associated with the first plurality of photographs (Paragraphs [0609]-[0611]: "… device 100 may determine a reference image according to longitude and latitude, based on longitude and latitude stored as metadata in an image file …").

As per claim 11, Kim teaches a method of determining a conditional profile adjustment datum (interpreted as the health status information and biological condition taught in FIG. 5 and Paragraph [0033], quoted above), the method comprising:

using at least a processor (Paragraph [0740], quoted above), receiving a first plurality of photographs related to a human subject (FIG. 8 and Paragraph [0009], quoted above); and

using the at least a processor, identifying a first conditional indicator as a function of the first plurality of photographs and entries contained within an expert database (Paragraphs [0139]-[0141], quoted above).

Kim does not explicitly teach the remaining limitations: using the at least a processor, generating a first conditional profile by: training a classifier on a training dataset including a plurality of example conditional indicators as inputs correlated to a plurality of example conditional profiles as outputs; and generating the first conditional profile as a function of the first conditional indicator using the trained classifier; using the at least a processor, determining a conditional profile adjustment datum as a function of the first conditional profile; and using the at least a processor, communicating the conditional profile adjustment datum to the human subject.

However, within analogous art, Liu teaches generating a first conditional profile (page 22003, col. 1, quoted above) by: training a classifier on a training dataset including a plurality of example conditional indicators as inputs correlated to a plurality of example conditional profiles as outputs (page 22003, FIGURE 1 and col. 1, quoted above); and generating the first conditional profile as a function of the first conditional indicator using the trained classifier (page 22006, col. 1, quoted above); using the at least a processor, determining a conditional profile adjustment datum as a function of the first conditional profile (page 22006, col. 1, quoted above); and using the at least a processor, communicating the conditional profile adjustment datum to the human subject (page 22006, col. 2, quoted above).

One of ordinary skill in the art would have been motivated to combine the teaching of Liu with the modified teaching of Kim because Liu provides a method and system for implementing clinical diagnostics of human health based on medical image data.
Therefore, it would have been obvious for one in the ordinary skills in the art before the effective filing date of the claimed invention to implement the Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning mentioned by YULIANG LIU et al. within the modified teaching of the Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium mentioned by KIM et al. for implementation of clinical diagnostics of human health based on medical image data. As per claim 12, Combination of KIM and YULIANG LIU et al. teaches claim 11, KIM teaches wherein the method further comprises: using the at least a processor( Paragraph [0740]- “…The video processor 135 may process video data included content received through the communication unit 130 or included in content stored in the memory 120. The video processor 135 may perform various image processes …”), communicating a feedback prompt to human subject; and using the at least a processor, receiving feedback from the human subject( Paragraphs [0039-0040]- “… a user interface unit configured to receive a user input for executing a video call application in the device, wherein the controller may be configured to, when the video call application is executed according to the received user input, control the imager to capture the second image, and to extract the second health status information from the captured second image….”) . As per claim 14, Combination of KIM and YULIANG LIU et al. teaches claim 11, KIM teaches wherein the method further comprises, using the at least a processor, communicating ( Paragraph [0740]- “…The video processor 135 may process video data included content received through the communication unit 130 or included in content stored in the memory 120. 
The video processor 135 may perform various image processes …”) to the human subject a medical professional identifier( Paragraph [0509]- “… the process of obtaining the health status information may be performed by the service server 1000 communicating with the device 100. The service server 1000 may be, for example, a server that provides a service of obtaining health status information, and may be a medical service server, an application server, or a website server.”) . As per claim 15, Combination of KIM and YULIANG LIU et al. teaches claim 11, KIM teaches wherein the method further comprises, using the at least a processor( Paragraph [0740]- “…The video processor 135 may process video data included content received through the communication unit 130 or included in content stored in the memory 120. The video processor 135 may perform various image processes …”), determining a conditional status of the human subject as a function of the first conditional profile, wherein the conditional profile adjustment datum identifies a therapy associated with the conditional status( Paragraph [0664]- “…the third party server 2000a may be a server providing information about life habits suitable to the user. The third party server 2000b may be a server providing a diet therapy suitable to the user. Also, the third party server 2000c may be a server for recommending exercises suitable to the user.”) . As per claim 16, Combination of KIM and YULIANG LIU et al. teaches claim 15, KIM teaches wherein the method further comprises, using the at least a processor ( Paragraph [0740]- “…The video processor 135 may process video data included content received through the communication unit 130 or included in content stored in the memory 120. The video processor 135 may perform various image processes …”), displaying to the human subject a visual element as a function of the therapy associated with the conditional status( Paragraph [0089]- “ FIG. 
40 is a diagram for describing a method of displaying photographing circumstance information obtained while capturing an image, together with health status information of a user,…” AND Paragraph [0664]- “…the third party server 2000a may be a server providing information about life habits suitable to the user. The third party server 2000b may be a server providing a diet therapy suitable to the user….”) . As per claim 17, Combination of KIM and YULIANG LIU et al. teaches claim 15, KIM teaches wherein the method further comprises, using the at least a processor, communicating to the human subject identity information of an entity providing the therapy( Paragraph [0089]- “ FIG. 40 is a diagram for describing a method of displaying photographing circumstance information obtained while capturing an image, together with health status information of a user,…” AND Paragraph [0664]- “…the third party server 2000a may be a server providing information about life habits suitable to the user. The third party server 2000b may be a server providing a diet therapy suitable to the user….”) . As per claim 18, Combination of KIM and YULIANG LIU et al. teaches claim 11, KIM teaches wherein the first plurality of photographs is received from a computing device associated with a social media site( Paragraph [0108]- “ FIG. 54 is a diagram for describing a method of displaying, by a device, health status information of a user when a social network application is executed,…” AND Paragraph [0303]- “…receiving a user input of selecting a photograph, the device 100 may download the selected photograph from an external server. The external server may be a social network service (SNS) server,…”) . As per claim 20, Combination of KIM and YULIANG LIU et al. 
teaches claim 11, KIM teaches wherein the first conditional indicator is identified as a function of metadata associated with a photograph of the first plurality of photographs ( Paragraphs [0609-0611]- “… device 100 may determine a reference image according to longitude and latitude, based on longitude and latitude stored as metadata in an image file…”) . 2. Claims 9 and 19 are rejected under 35 U.S.C 103(a) as being unpatentable over KIM (USPUB 20150261996) in view of YULIANG LIU et al. (NPL DOC: " Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning," 1st March 2019, IEEEAccess, Volume 7,2019, Pages 22002 - 22007.) in further view of Kurtz et al. (USPUB 20080297586). As per claim 9, Combination of KIM and YULIANG LIU et al. teaches claim 1, Combination of KIM and YULIANG LIU et al. does not explicitly teach wherein the first plurality of photographs is received from a computing device associated with a metaverse. Within analogous art, Kurtz et al. teaches wherein the first plurality of photographs is received from a computing device associated with a metaverse ( Paragraphs [0158-0159]). One of ordinary skill in the art would have been motivated to combine the teaching of Kurtz et al. within the combined modified teaching of the Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium mentioned by KIM et al. and the Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning mentioned by YULIANG LIU et al. because the Personal controls for personal video communications mentioned by Kurtz et al. provides a method and system for implementation of video characteristic modification from captured video images. Therefore, it would have been obvious for one in the ordinary skills in the art before the effective filing date of the claimed invention to implement the Personal controls for personal video communications mentioned by Kurtz et al. 
within the combined modified teaching of the Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium mentioned by KIM et al. and the Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning mentioned by YULIANG LIU et al. for implementation of video characteristic modification from captured video images. As per claim 19, Combination of KIM and YULIANG LIU et al. teaches claim 11, Combination of KIM and YULIANG LIU et al. does not explicitly teach wherein the first plurality of photographs is received from a computing device associated with a metaverse. Within analogous art, Kurtz et al. teaches wherein the first plurality of photographs is received from a computing device associated with a metaverse( Paragraphs [0158-0159]). One of ordinary skill in the art would have been motivated to combine the teaching of Kurtz et al. within the combined modified teaching of the Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium mentioned by KIM et al. and the Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning mentioned by YULIANG LIU et al. because the Personal controls for personal video communications mentioned by Kurtz et al. provides a method and system for implementation of video characteristic modification from captured video images. Therefore, it would have been obvious for one in the ordinary skills in the art before the effective filing date of the claimed invention to implement the Personal controls for personal video communications mentioned by Kurtz et al. within the combined modified teaching of the Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium mentioned by KIM et al. and the Detecting Diseases by Human-Physiological-Parameter-Based Deep Learning mentioned by YULIANG LIU et al. 
for implementation of video characteristic modification from captured video images. It is noted that any citations to specific, pages, columns, lines, or figures in the prior art references and any interpretation of the reference should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123. Allowable Subject Matter 3. Claims 3 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 4. The following is an examiner’s statement of reasons for objecting the claims as allowable subject matter: As to claim 3, prior art of record does not teach or suggest the limitation mentioned within claim 3: “…receive a second plurality of photographs related to the human subject; identify a second conditional indicator as a function of the second plurality of photographs and entries contained within the expert database; generate a second conditional profile as a function of the second conditional indicator using the trained classifier; and determine a profile velocity datum by comparing the second conditional profile with the first conditional profile. 
" As to claim 13, the prior art of record does not teach or suggest the limitation recited within claim 13: "… receiving a second plurality of photographs related to the human subject; using the at least a processor, identifying a second conditional indicator as a function of the second plurality of photographs and entries contained within the expert database; using the at least a processor, generating a second conditional profile as a function of the second conditional indicator using the trained classifier; and using the at least a processor, determining a profile velocity datum by comparing the second conditional profile with the first conditional profile."

Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled "Comments on Statement of Reasons for Allowance."

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to the PTO-892, Notice of References Cited, for a listing of analogous art.

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMAR S ISMAIL, whose telephone number is (571) 272-9799 and whose fax number is (571) 273-9799. The examiner can normally be reached M-F, 9:00am-6:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David C. Payne, can be reached at (571) 272-3024. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OMAR S ISMAIL/
Primary Examiner, Art Unit 2635

Prosecution Timeline

Jan 29, 2024
Application Filed
Mar 05, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603705
LATENCY EQUALIZATION FOR OPTICAL FILTER
2y 5m to grant · Granted Apr 14, 2026

Patent 12596911
METHOD AND APPARATUS WITH NEURAL NETWORK CONTROL
2y 5m to grant · Granted Apr 07, 2026

Patent 12594391
MODEL-GUIDED IMAGING FOR MECHANICAL VENTILATION
2y 5m to grant · Granted Apr 07, 2026

Patent 12586365
OBJECT CLASSIFICATION USING MULTIPLE LABELS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS
2y 5m to grant · Granted Mar 24, 2026

Patent 12586359
SYNTHETIC-TO-REALISTIC IMAGE CONVERSION USING GENERATIVE ADVERSARIAL NETWORK (GAN) OR OTHER MACHINE LEARNING MODEL
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
92%
Grant Probability
99%
With Interview (+9.7%)
2y 2m
Median Time to Grant
Low
PTA Risk
Based on 802 resolved cases by this examiner. Grant probability derived from career allow rate.
