Prosecution Insights
Last updated: April 19, 2026
Application No. 18/980,664

INTERACTIVE ORAL CAVITY PHOTOGRAPHY SYSTEM AND ARTIFICIAL INTELLIGENCE IMAGE RECOGNITION ORAL CAVITY CANCER SCREENING METHOD

Non-Final OA: §101, §103, §112

Filed: Dec 13, 2024
Examiner: GROSS, JASON PATRICK
Art Unit: 3797
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: National Health Research Institutes
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 9 granted / 14 resolved; -5.7% vs TC avg)
Interview Lift: +62.5% (strong; resolved cases with an interview vs. without)
Typical Timeline: 2y 8m average prosecution; 34 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 22.2% (-17.8% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 14 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 6 and 11-14 are objected to because of the following informalities: Claim 6 appears to be written in independent form, yet it also refers back to the other independent claim, claim 1. Under one interpretation, claim 6 may be construed as an independent claim; under another, it may be construed as a dependent claim. To prevent any foreseeable ambiguity, it is suggested either to bring the entirety of claim 1 into claim 6 so that the claim is construed as a proper independent claim, or to correct the dependency of claim 6 (as in the other dependent claims, e.g., claim 2) so that the claim is construed as a proper dependent claim. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: the “guiding module” in claim 1; the “image capture unit” in claim 1; and the “artificial intelligence graphic recognition module” in claim 1. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

With respect to the “guiding module,” this element is being interpreted under 35 U.S.C. 112(f). However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification is devoid of adequate structure to perform the claimed function. In particular, the specification states that the claimed function of guiding is performed by providing a schematic image (see, e.g., [0049]). However, there is no disclosure of any particular structure, either explicit or inherent, to perform these steps. Based on the description at [0049], Examiner suspects that the guiding module is a software-based module that provides the user interface. However, the specification does not provide sufficient detail such that one of ordinary skill in the art would understand which structure(s) perform the claimed function.

With respect to the “image capture unit,” this element is being interpreted under 35 U.S.C. 112(f). The corresponding structure described in the specification as performing the claimed function is a camera of a smartphone and DSLR cameras (see [0011] and [0056]).

With respect to the “artificial intelligence graphic recognition module,” this element is being interpreted under 35 U.S.C. 112(f). The corresponding structure described in the specification as performing the claimed function is a YOLOv7 model and neural networks (see [0056]).

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim limitation “guiding module” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification is devoid of adequate structure to perform the claimed function. In particular, the specification states that the claimed function is performed by providing a reference schematic image. There is no disclosure of any particular structure, either explicit or inherent, to perform these steps. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.

Applicant may: (a) amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; (b) amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
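For orientation only, here is a minimal sketch of the kind of software routine that could serve as disclosed structure for a “guiding module” of this sort: it steps through predefined oral-cavity locations, supplying a reference schematic image and guide-line coordinates for each. Every name, path, and coordinate below is an illustrative assumption, not content from the application or the Office Action.

```python
# Hypothetical guiding-module sketch; all names and values are invented
# for illustration and are not taken from the application.
from dataclasses import dataclass

@dataclass
class GuideView:
    location: str        # anatomical location to photograph
    schematic_path: str  # reference schematic image shown to the user
    guide_line: list     # (x, y) vertices of the guide line to overlay

class GuidingModule:
    """Steps the user through the required capture locations."""
    def __init__(self, views):
        self.views = views
        self.index = 0

    def current(self) -> GuideView:
        # Schematic and guide line for the capture the user should do next.
        return self.views[self.index]

    def advance(self) -> bool:
        # Move to the next location; False once every view is captured.
        self.index += 1
        return self.index < len(self.views)

module = GuidingModule([
    GuideView("right buccal", "schematics/right_buccal.png",
              [(10, 40), (120, 35), (180, 90)]),
    GuideView("tongue left", "schematics/tongue_left.png",
              [(20, 60), (140, 55), (160, 110)]),
])
print(module.current().location)  # -> right buccal
```

A disclosure tying the claimed guiding function to a concrete routine of this kind, and linking it to the claim language, is what the rejection above says the specification lacks.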
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. As described above, the disclosure does not provide adequate structure to perform the claimed function of guiding (e.g., a workflow). The specification does not demonstrate that applicant has made an invention that achieves the claimed function, because the invention is not described with sufficient detail such that one of ordinary skill in the art could reasonably conclude that the inventor had possession of the claimed invention.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite or similarly recite:

[a] an artificial intelligence graphic recognition module, communicatively connected to the image capture unit and configured to receive the at least two oral mucosa images and generate a result through a graphic recognition algorithm (claims 1, 6, 11-14);

[b] a risk warning step: providing warnings of different color lights according to the results to correspond to a risk level of oral cavity cancer, the different color lights at least include a green light, a yellow light and a red light, the green light indicates that the risk level is low risk, the yellow light indicates that the risk level is medium risk, and the red light indicates that the risk level is high risk (claim 7).

Claim limitation [a], as drafted and under its broadest reasonable interpretation, recites a mathematical concept because it involves mathematical calculations. (MPEP 2106.04(a)(2), I, C).
The MPEP provides several examples of mathematical calculations, which include “performing a resampled statistical analysis to generate a resampled distribution.” In this case, an artificial intelligence graphic recognition module (e.g., a neural network) applies a sequence of mathematical transforms to the input images and then extracts features from the images. (See also the recently decided Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025): “[C]laims that do no more than apply established methods of machine learning to a new data environment are not patent eligible.”)

Claim limitation [b], as drafted and under its broadest reasonable interpretation, likewise recites a mathematical concept because it involves mathematical calculations. (MPEP 2106.04(a)(2), I, C). (See, e.g., Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014) (although the claims did not recite a particular mathematical formula, the court held “[w]ithout additional limitations, a process that employs mathematical algorithms to manipulate existing information to generate additional information is not patent eligible.”).) In this case, providing warnings of different colors first involves assessing the probabilities of different regions being cancerous, which in turn involves image analysis. With the probabilities determined, colors then need to be assigned to the different probabilities.

The next question is whether the claims integrate the judicial exception into a practical application. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception. (MPEP 2106.04(d)). In this case, the additional elements/steps to consider include (1) a guiding module providing a schematic image; (2) an image capture unit; and (3) a storage module configured to store images. Here, the judicial exception is not integrated into a practical application. Additional elements (1) and (3) recite generic computer components that either include instructions to implement the abstract idea on a computer and/or merely use a computer as a tool to perform an abstract idea. (MPEP 2106.04(d)(I), which also refers to MPEP 2106.05(f)). Additional element (2) does no more than generally link the judicial exception to a particular technological environment (i.e., imaging lesions). (MPEP 2106.04(d)(I), which also refers to MPEP 2106.05(h)). Moreover, capturing images to be analyzed is insignificant extra-solution activity (i.e., pre-solution activity) that does not impose meaningful limits on the claim. (MPEP 2106.04(d)(I), which also refers to MPEP 2106.05(g)).

The claims also do not include additional elements/steps that are sufficient to amount to significantly more than the judicial exception. A shared quality of the additional elements and/or steps is that they do not recite any meaningful limitation that transforms the judicial exception into a patent-eligible application. (MPEP 2106.05(II)). Each of the additional elements is recited at a high level such that it does not meaningfully limit the claims. (MPEP 2106.05(A)).
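For orientation only, the arithmetic character the examiner ascribes to limitations [a] and [b] can be sketched in a few lines: neural-network inference reduces to multiply-accumulate operations plus elementwise nonlinearities, and the color warning is a threshold map over the resulting probability. The kernel, thresholds, and logistic squash below are invented placeholders, not values from the application.

```python
# Hedged illustration of the "mathematical concept" characterization;
# all numbers and thresholds are invented for this sketch.
import numpy as np

def conv2d_valid(image, kernel):
    """One convolutional feature map: pure multiply-accumulate arithmetic."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU: also elementwise arithmetic

def risk_color(p):
    """Limitation [b] as a threshold map; the cutoffs are assumptions."""
    return "green" if p < 0.3 else ("yellow" if p < 0.7 else "red")

rng = np.random.default_rng(0)
features = conv2d_valid(rng.random((8, 8)), rng.random((3, 3)))
p_malignant = 1.0 / (1.0 + np.exp(-features.mean()))  # logistic squash
print(p_malignant, risk_color(p_malignant))
```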
Moreover, many of these elements (e.g., the image capture unit and the storage module) are well-understood, routine, and conventional activities/elements that were previously known to the industry and are specified at a high level of generality such that they do not meaningfully limit the claims. (MPEP 2106.05(A)). (See, e.g., the Section 103 rejections below.) Accordingly, claims 1 and 7 do not recite patent-eligible subject matter.

Dependent claims 2-6 and 8-14 also fail to recite patent-eligible subject matter. Claims 2 and 8 recite different locations represented by the schematic images, which does no more than generally link the judicial exception to a particular technological environment (i.e., imaging lesions). Claim 3 recites that the guide line is an outline of the reference schematic image; however, this is not a meaningful limitation on the claims, as the reference schematic image is not specified. Claims 4 and 5 recite that the system is a smartphone and that an application is installed on the smartphone; however, this does no more than generally link the judicial exception to a particular technological environment (i.e., imaging lesions). Claims 6 and 11-14 recite, apart from their dependencies, method steps identical to the limitations found in claim 1; for at least the above reasons with respect to claim 1, claims 6 and 11-14 are also patent ineligible. Claims 9 and 10 recite insignificant extra-solution activity, namely pre-solution activity (creating a new folder) and post-solution activity (uploading images). (MPEP 2106.04(d)(I), which also refers to MPEP 2106.05(g)). In any case, they do not recite any meaningful limitation that transforms the judicial exception into a patent-eligible application. Accordingly, none of the claims recite patent-eligible subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4-6, 8-11, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. No. 2013/0209954 A1 (hereinafter “PRAKASH”) and U.S. Patent Appl. No. 2023/0237650 A1 (hereinafter “GADIYAR”).

PRAKASH concerns techniques for standardized imaging of the oral cavity. (Title). According to PRAKASH, “[e]ven though the oral cavity is widely accessible for examination, no standards exist for comprehensive imaging of the cavity. Techniques are provided for some combination of inexpensive, efficient, comprehensive or standardized imaging of the oral cavity.” ([0003]). PRAKASH further notes that “[a] standardized oral cavity scanner would allow for a quantitative method for clinicians to track the progress of the lesions in individual patients before and during therapy.” ([0036]). PRAKASH emphasizes that “the real technical difficulty in comprehensive imaging of the oral cavity can be experienced simply by trying to take some images of the oral cavity using readily available commercial camera….” ([0038]).
To this end, PRAKASH describes an apparatus that includes a bracket having a mouthpiece and a camera mount. (Abstract and Figure 5). PRAKASH also describes a software application that guides the user through a workflow for entering information and capturing the correct images. (See, e.g., [0106]-[0111] and Figures 6A, 6B, 6C, and 7.)

With respect to claim 1, PRAKASH teaches an interactive oral cavity photography system (see, e.g., the camera system in Figure 5 and the interactive GUI in Figs. 6A-7 for entering information and capturing images), the system comprising:

a guiding module (see, e.g., GUI pages 601, 602, and 603 in Figs. 6A, 6B, and 6C, respectively). NOTE: As discussed above in the Section 112 rejections, it is not clear what is meant by “guiding module.” However, similar to Applicant's “guiding step S120” shown in Figs. 2-5, the GUI pages 601-603 in PRAKASH function to guide the user through a workflow;

an image capture unit ([0031]: “smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.”), communicatively connected to the guiding module and configured to capture and digitize at least two oral mucosal images ([0111]: the GUIs direct the user to capture and digitize seven images: “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” and “Tongue Right (TR)”; see also the images in Figures 9A-9F). (With respect to digitizing the images, see [0082]: “The raw frame image is stored in the memory of the camera, such as the cell phone 280 (e.g., in memory 1351)….”); and

a storage module configured to store the at least two oral mucosa images and the result ([0100]: “In step 485, the one or more standard images and metadata, including any automated analysis results, are stored in memory, either on the local device or on the remote server 292.” (emphasis added)).

However, PRAKASH does not explicitly teach that the guiding module is configured to provide a reference schematic image of at least two different locations within an oral cavity. PRAKASH also does not explicitly teach that the image capture unit is configured to capture and digitize at least two oral mucosal images of the patient based on a guide line corresponding to the reference schematic image. PRAKASH does, however, teach guiding the user to capture predefined images. For example, with respect to Figure 6C, “[t]he labels 630 a through 630 g indicate multiple locations for frames to capture, which are recommended for producing the standardized image. In the illustrated embodiment, seven locations indicated by the labels 630 a through 630 g are “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” “Tongue Right (TR),” respectively. The toggle buttons are filled, as depicted for toggle button 638 a, as each frame at the corresponding location is captured and stored by the application on the camera, digital camera or camera phone, other equivalent device.” ([0111]).

In the same field of endeavor, GADIYAR teaches computer-implemented methods for analyzing an input image of a mouth region from a user to provide information regarding a disease or condition of the mouth region. (Abstract). GADIYAR teaches that it is important to detect any problems early, before they become major problems. ([0003]).
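The capture workflow PRAKASH describes (seven labeled frames, with a toggle filled as each frame is stored) can be pictured with a short sketch. The function names and the stand-in camera callback are assumptions for illustration; only the seven view labels come from PRAKASH [0111].

```python
# Sketch of a PRAKASH-style guided multi-view capture session; the code
# structure is hypothetical, and only the view labels come from [0111].
REQUIRED_VIEWS = ["L", "LC", "C", "RC", "R", "TL", "TR"]

def capture_session(capture_fn):
    """Prompt for each required frame and return the completed image set."""
    captured = {}
    for view in REQUIRED_VIEWS:
        captured[view] = capture_fn(view)  # e.g., trigger the phone camera
        done = ", ".join(v for v in REQUIRED_VIEWS if v in captured)
        print(f"captured so far: {done}")  # the 'filled toggle' feedback
    return captured

# Stand-in for the camera: returns a placeholder object per view.
images = capture_session(lambda view: f"<frame:{view}>")
assert len(images) == len(REQUIRED_VIEWS)
```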
As such, GADIYAR teaches “a readily available detection system and method that provides patients access to a user-friendly tool” that will analyze and provide a diagnosis of any problems based on user images. ([0004]). Although GADIYAR's primary concern is dental problems, GADIYAR frequently suggests that embodiments may be applicable to detecting and/or determining a probability of oral cancer. (See, e.g., [0020], [0021], [0028], [0043] and claims 34 and 36.)

Like PRAKASH, GADIYAR teaches a user-friendly workflow for capturing the desired images. Each of the images corresponds to a predetermined region. “For example, the application may provide several different regions of the inner and outer mouth area for the user to select, such as an outer front view, an outer right view, an outer left view, an internal upper jaw, or an internal lower jaw of the teeth.” ([0023]). Because the images are being captured “locally on a user computing device (e.g., mobile device, smartphone,…,” GADIYAR's software application provides clear instructions on how to capture those images. “Optionally, a help video or written or auditory instructions may be presented based on the selected teeth image at block S130. The help video and/or instructions may provide details on how to configure an image sensor 440 (e.g., a camera), where to distance the camera, particular angles of the camera, or lighting, among other recommendations.” ([0023]). Notably, the assistance can include guidelines: “Additionally, guidelines, frames, points, geometric shapes, or combinations thereof, may be provided to assist the user in capturing accurate and/or clear images….” ([0023]).

Figures 11A-11Z are various screenshots of an “oral health score software application” that guides the user. Notably, the screenshots include one page in which reference schematic images are shown: “FIG. 11G is an example screenshot configured to present a number of various teeth images, for example, a teeth front view, a teeth right view, a teeth left view, a teeth upper jaw, and a teeth lower jaw. Each view is configured with a camera icon for the user to select which view they are planning to capture.” ([0046]). NOTE: Examiner is interpreting the “various teeth images” in Fig. 11G as “reference schematic images.” Compare Figure 11G to Applicant's Figure 5 showing reference schematic images 120.

It would have been obvious to one having ordinary skill in the art to modify the GUI pages of the application to provide, as taught in GADIYAR, reference schematic images of different locations within an oral cavity and to provide, as also taught in GADIYAR, a guide line corresponding to the reference schematic image to assist in capturing the desired image. More specifically, the reference schematic images would be of locations within the oral cavity, as one skilled in the art would choose images from within the oral cavity in order to complete PRAKASH's workflow for oral cancer screening. One would have been motivated to modify the GUI pages in this manner, as well as to use a guideline, so that the user more readily understands which of the multiple images in the set is to be acquired next and “how to configure [the camera], where to distance the camera, particular angles of the camera, or lighting, among other recommendations.” ([0023] of GADIYAR).
There would have been a reasonable expectation of success, as each of GADIYAR and PRAKASH teaches that GUI pages may be designed to guide the user through the workflow.

NOTE: With respect to the guide line “corresponding to the reference schematic image,” each of the reference schematic images in GADIYAR correlates to one particular image in the set of images. “At block S125, the application prompts the user to select and/or provide a teeth image… Optionally, a help video or written or auditory instructions may be presented based on the selected teeth image at block S130. The help video and/or instructions may provide details on how to configure an image sensor 440 (e.g., a camera), where to distance the camera, particular angles of the camera, or lighting, among other recommendations. Additionally, guidelines, frames, points, geometric shapes, or combinations thereof, may be provided to assist the user in capturing accurate and/or clear images….” (emphasis added) ([0023]). Thus, for each “selected teeth image,” guidelines may be provided for capturing the selected image. As such, the guidelines correspond to the reference schematic image.

However, PRAKASH also does not explicitly teach an artificial intelligence graphic recognition module communicatively connected to the image capture unit and configured to receive the at least two oral mucosa images and generate a result through a graphic recognition algorithm, or a storage module communicatively connected to the artificial intelligence graphic recognition module. Nonetheless, PRAKASH does contemplate analyzing the images to determine a result (i.e., which areas in the oral cavity have suspicious lesions): “In some embodiments, step 425 includes recommendations by an automated algorithm on which areas of the image are suspect and worth a close examination by the clinician, or an estimated volume of a diseased areas, such as a lesion.” ([0425]).

In the same field of endeavor, GADIYAR teaches using trained machine-learning models to analyze the images and generate a result: “[T]he application uploads one or more images to the remote trained computing device 425. It will be appreciated that the trained computing device 425 may also reside on a personal computing device (not shown). In an exemplary embodiment of the present invention, the trained computing device 425 is a machine-learning system that is trained to analyze the received images, and provide the results for an individual score at block S175, and is discussed in detail further below.” ([0024]). (See also [0020], describing a computing device 410 using a processor 415, memory 420, and a software application; [0023]-[0024], describing the trained computing device 425 as a machine-learning system trained to analyze the received images and provide the results for an individual score at block S175; and “[T]he machine learning system may be used by users and/or their providers as a tool for early detection of any conditions or diseases. In some embodiments, the machine learning system may be trained to predict a severity or stage of severity of a condition or disease. Further, the machine learning system may be trained to analyze an image and predict the presence or absence of one or more of:…oral cancer….”). Figure 4 and [0033] describe how to generate a plurality of trained models.
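The proposed combination swaps PRAKASH's rule-based “automated algorithm” for a trained model of the kind GADIYAR describes. A minimal sketch of that swap follows; the heuristic, the model class, and its predict interface are stand-ins invented for illustration, not GADIYAR's actual code and not a real library API.

```python
# Hypothetical before/after of the proposed PRAKASH-GADIYAR combination.
import numpy as np

def rule_based_suspect_regions(image):
    """PRAKASH-style heuristic stand-in: flag unusually red pixels."""
    return image[..., 0] > 0.8  # boolean mask of suspect pixels

class TrainedRecognitionModule:
    """Stand-in for a trained classifier (e.g., a YOLOv7-class detector)."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, image):
        # Toy linear score standing in for real learned inference.
        score = float(np.tanh(np.mean(image * self.weights)))
        return {"malignancy_probability": (score + 1.0) / 2.0}

image = np.random.default_rng(1).random((64, 64, 3))
module = TrainedRecognitionModule(weights=np.full((64, 64, 3), 0.01))
print(int(rule_based_suspect_regions(image).sum()), module.predict(image))
```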
It would have been obvious to one having ordinary skill in the art to modify or replace the “automated algorithm” in PRAKASH with the machine-learning system (i.e., an artificial intelligence graphic recognition module) taught in GADIYAR. One would have been motivated to replace the automated algorithm because machine-learning systems are generally better at complex pattern-recognition tasks (e.g., detecting lesions within an oral cavity) and more adaptable to real-world variation. There would have been a reasonable expectation of success, as GADIYAR teaches that trained models can be applied to detecting oral cancer.

With respect to claim 2, PRAKASH teaches that the at least two different locations include upper gingiva, palate, right buccal, right aspect of tongue, left aspect of tongue, left buccal, sublingual, and lower gingiva. See Figure 6C (e.g., “tongue left” and “tongue right”) and also Figures 9A-9F showing different regions within the oral cavity. “These are the standard positions that an oral specialist will place the tongue during an oral examination.” ([0114]).

With respect to claim 4, PRAKASH teaches that the interactive oral cavity photography system is a smartphone. “Some embodiments of the invention are described below in the context of using a smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.” ([0031]). Figure 5D shows a “cell phone 584.”

With respect to claim 5 (depending from claim 4), PRAKASH teaches that an application program is installed in the smartphone. “FIG. 6A through FIG. 6C are block diagrams that illustrate an example graphical user interface (GUI) for an oral cavity image application for a programmable cell phone with built-in camera and processor….” ([0022]). (See also [0065]: “The cell phone 280 includes an oral cavity image application 230 configured to execute on a microprocessor main control unit (MCU) of the cell phone 280,….”). GADIYAR teaches that the smartphone is configured to execute the graphic recognition algorithm. “The trained system may reside on the local computing device or on a remote computing device (e.g., server).” ([0021]). “Any of the methods described herein may be performed locally on a user computing device (e.g., mobile device, smartphone, …” ([0022]). It would have been obvious to one having ordinary skill in the art to modify the system so that the graphic recognition algorithm is executed by the smartphone, as taught in GADIYAR. One would have been motivated to have the smartphone execute the graphic recognition algorithm in order to obtain more immediate results by avoiding delays that can occur when communicating with a remote server. There would have been a reasonable expectation of success, as GADIYAR teaches that trained models can be executed by a smartphone.

With respect to claim 6, PRAKASH and GADIYAR teach an artificial intelligence image recognition oral cavity cancer screening method using the interactive oral cavity photography system (see discussion of the interactive oral cavity photography system in the rejection of claim 1). The method comprises: a guiding step (see, e.g., GUI pages 601, 602, and 603 in Figs. 6A, 6B, and 6C, respectively, in PRAKASH and Figures 11A-11Z in GADIYAR): providing a reference schematic image of at least two different locations in the oral cavity (see, e.g., Figure 11G of GADIYAR, having multiple reference schematic images.
NOTE: As discussed above with respect to claim 1, the reference schematic images would be of locations within the oral cavity, as one skilled in the art would choose images from within the oral cavity in order to complete the workflow in PRAKASH.); an image capturing step ([0031]: “smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.”): capturing at least two oral mucosal images of the at least two different locations in the patient's oral cavity ([0111]: the GUIs direct the user to capture and digitize seven images: “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” and “Tongue Right (TR)”; see also the images in Figures 9A-9F) based on a guide line corresponding to the reference schematic image (this would have been obvious to one having ordinary skill in the art, as discussed above, based on GADIYAR's teachings at [0023]), and digitizing the at least two oral mucosal images ([0082]: “The raw frame image is stored in the memory of the camera, such as the cell phone 280 (e.g., in memory 1351)….”); an artificial intelligence image recognition step: receiving the at least two oral mucosa images and generating a result through a graphic recognition algorithm (it would have been obvious to one having ordinary skill in the art, as discussed above with respect to claim 1, to modify or replace the “automated algorithm” in PRAKASH with the machine-learning system as taught in GADIYAR at [0024] and [0020], [0033]); and a storage step: storing the at least two oral mucosa images and the result ([0100]: “In step 485, the one or more standard images and metadata, including any automated analysis results, are stored in memory, either on the local device or on the remote server 292.” (emphasis added)).

PRAKASH does not explicitly teach a login step: logging in as a user. However, logging into an application, especially a medically related application, is well known. Nevertheless, GADIYAR teaches: “For example, a user may select a doctor login module at block S225 or a patient login module at block S230. If the user is a patient, the application requires login credentials from the user. At block S232, the application is configured to accept credentials associated with a user's social media account, such as a Google® or Facebook®, or at block S234, the user may sign in using an email account. It will be appreciated that the user may also sign in using a sign-in name or any equivalents thereof.” It would have been obvious to one having ordinary skill in the art to modify the software application to include a login step for logging in a user, as taught in GADIYAR. One would have been motivated to add the login step to prevent others from accessing sensitive medical information stored on the smartphone. There would have been a reasonable expectation of success, as logins are frequently required when accessing a program with medically sensitive information.

With respect to claim 8 (depending from claim 6), PRAKASH teaches that the at least two different locations include upper gingiva, palate, right buccal, right aspect of tongue, left aspect of tongue, left buccal, sublingual, and lower gingiva. See Figure 6C (e.g., “tongue left” and “tongue right”) and also Figures 9A-9F showing different regions within the oral cavity. “These are the standard positions that an oral specialist will place the tongue during an oral examination.” ([0114]).
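Circling back to the claim 5 combination above, the local-versus-remote execution point is essentially a dispatch decision. The sketch below illustrates it under stated assumptions: the model stub, the function names, and the remote-client interface are all invented for illustration, not taken from PRAKASH or GADIYAR.

```python
# Hypothetical on-device vs. remote inference dispatch (claim 5 rationale).
class LocalModel:
    def predict(self, image):
        return {"malignancy_probability": 0.12}  # placeholder output

def analyze(images, model, remote_client=None, prefer_local=True):
    """Prefer on-device inference to avoid remote round-trip latency."""
    if prefer_local or remote_client is None:
        return [model.predict(img) for img in images]  # smartphone path
    return remote_client.analyze_batch(images)         # server path

print(analyze(["frame_L", "frame_TR"], LocalModel()))
```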
With respect to claim 9 (depending from claim 6), PRAKASH teaches that the guiding step further comprises creating a new folder to store the at least two oral mucosa images before providing the guide line and the reference schematic image of the at least two different locations in the oral cavity. Although PRAKASH does not explicitly describe creating a new folder for image storage before providing the guide line and the reference schematic image, Examiner is interpreting the following as necessarily creating the file prior to providing the guideline and schematic images. Figure 6A of PRAKASH clearly shows a GUI page for entering a new patient's information. Notably, the GUI page in Figure 6A occurs prior to the GUI pages presented to the user for acquiring the images, which occur in subsequent GUI pages. “The information provided by the user/clinician in the text boxes and pull down menus of page 601 constitutes metadata for the raw images to be captured and the standardized image to be generated. Patient background data is often just as important as the images collected.” ([0109]). Once the person's file is created, PRAKASH's GUI requires each photo to be acquired one at a time, with a separate label for each. “The pull-down menu 624 lets the user/clinician indicate which frame is being captured among left, center, right, among others as listed in FIG. 6C, described below. Button 626 a, labeled “Press to take photos,” is configured to be activated by the user/clinician to cause the camera (e.g., mobile terminal 1300) to capture an image frame.” ([0110]). Notably, GUI page 603 shows a series of labels, with each label corresponding to a different image. “The toggle buttons are filled, as depicted for toggle button 638 a, as each frame at the corresponding location is captured and stored by the application on the camera, digital camera or camera phone, other equivalent device.” (emphasis added) ([0111]). Any images acquired using the GUI page associated with a person would necessarily be saved to that person's file. “Button 626 c, labeled ‘Next person,’ is configured to be activated by the user/clinician to cause the camera (e.g., mobile terminal 1300) to finish storing image frames and metadata for one subject so that such information can begin to be collected for another subject (e.g., to follow the YES branch from step 487 described in FIG. 4B, above.” ([0110]).

With respect to claim 10 (depending from claim 6), PRAKASH teaches that the artificial intelligence image recognition step further comprises uploading the at least two oral mucosa images to a server. Specifically, PRAKASH teaches that, after step 425, “the raw frames or standard images or metadata or some combination is stored, either locally on the memory of the camera, such as memory 1351 of cell phone 280, or remotely on the oral cavity image server 192, or some combination.” ([0086]). Step 425 corresponds to the automated algorithm of PRAKASH which, as discussed above, would be replaced by the trained machine-learning system of GADIYAR. As such, the images would be uploaded to the image server after analysis by the trained machine-learning system.

With respect to claim 11, PRAKASH and GADIYAR teach an artificial intelligence image recognition oral cavity cancer screening method using the interactive oral cavity photography system of claim 2 (see discussion of the interactive oral cavity photography system in the rejection of claim 2). The method comprises: a guiding step (see, e.g., GUI pages 601, 602, and 603 in Figs.
6A, 6B, and 6C, respectively, in PRAKASH and Figures 11A-11Z in GADIYAR): providing a reference schematic image of at least two different locations in the oral cavity (see, e.g., Figure 11G of GADIYAR, having multiple reference schematic images. NOTE: As discussed above with respect to claim 1, the reference schematic images would be of locations within the oral cavity, as one skilled in the art would choose images from within the oral cavity in order to complete the workflow in PRAKASH.); an image capturing step ([0031]: “smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.”): capturing at least two oral mucosal images of the at least two different locations in the patient's oral cavity ([0111]: the GUIs direct the user to capture and digitize seven images: “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” and “Tongue Right (TR)”; see also the images in Figures 9A-9F) based on a guide line corresponding to the reference schematic image (this would have been obvious to one having ordinary skill in the art, as discussed above, based on GADIYAR's teachings at [0023]), and digitizing the at least two oral mucosal images ([0082]: “The raw frame image is stored in the memory of the camera, such as the cell phone 280 (e.g., in memory 1351)….”); an artificial intelligence image recognition step: receiving the at least two oral mucosa images and generating a result through a graphic recognition algorithm (it would have been obvious to one having ordinary skill in the art, as discussed above with respect to claim 1, to modify or replace the “automated algorithm” in PRAKASH with the machine-learning system as taught in GADIYAR at [0024] and [0020], [0033]); and a storage step: storing the at least two oral mucosa images and the result ([0100]: “In step 485, the one or more standard images and metadata, including any automated analysis results, are stored in memory, either on the local device or on the remote server 292.” (emphasis added)).

PRAKASH does not explicitly teach a login step: logging in as a user. However, logging into an application, especially a medically related application, is well known. Nevertheless, GADIYAR teaches: “For example, a user may select a doctor login module at block S225 or a patient login module at block S230. If the user is a patient, the application requires login credentials from the user. At block S232, the application is configured to accept credentials associated with a user's social media account, such as a Google® or Facebook®, or at block S234, the user may sign in using an email account. It will be appreciated that the user may also sign in using a sign-in name or any equivalents thereof.” It would have been obvious to one having ordinary skill in the art to modify the software application to include a login step for logging in a user, as taught in GADIYAR. One would have been motivated to add the login step to prevent others from accessing sensitive medical information stored on the smartphone. There would have been a reasonable expectation of success, as logins are frequently required when accessing a program with medically sensitive information.

With respect to claim 13, PRAKASH and GADIYAR teach an artificial intelligence image recognition oral cavity cancer screening method using the interactive oral cavity photography system of claim 4 (see discussion of the interactive oral cavity photography system in the rejection of claim 4).
The method comprises: a guiding step (see, e.g., GUI pages 601, 602, and 603 in Figs. 6A, 6B, and 6C, respectively, in PRAKASH and Figures 11A-11Z in GADIYAR): providing a reference schematic image of at least two different locations in the oral cavity (see, e.g., Figure 11G of GADIYAR, having multiple reference schematic images. NOTE: As discussed above with respect to claim 1, the reference schematic images would be of locations within the oral cavity, as one skilled in the art would choose images from within the oral cavity in order to complete the workflow in PRAKASH.); an image capturing step ([0031]: “smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.”): capturing at least two oral mucosal images of the at least two different locations in the patient's oral cavity ([0111]: the GUIs direct the user to capture and digitize seven images: “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” and “Tongue Right (TR)”; see also the images in Figures 9A-9F) based on a guide line corresponding to the reference schematic image (this would have been obvious to one having ordinary skill in the art, as discussed above, based on GADIYAR's teachings at [0023]), and digitizing the at least two oral mucosal images ([0082]: “The raw frame image is stored in the memory of the camera, such as the cell phone 280 (e.g., in memory 1351)….”); an artificial intelligence image recognition step: receiving the at least two oral mucosa images and generating a result through a graphic recognition algorithm (it would have been obvious to one having ordinary skill in the art, as discussed above with respect to claim 1, to modify or replace the “automated algorithm” in PRAKASH with the machine-learning system as taught in GADIYAR at [0024] and [0020], [0033]); and a storage step: storing the at least two oral mucosa images and the result ([0100]: “In step 485, the one or more standard images and metadata, including any automated analysis results, are stored in memory, either on the local device or on the remote server 292.” (emphasis added)).

PRAKASH does not explicitly teach a login step: logging in as a user. However, logging into an application, especially a medically related application, is well known. Nevertheless, GADIYAR teaches: “For example, a user may select a doctor login module at block S225 or a patient login module at block S230. If the user is a patient, the application requires login credentials from the user. At block S232, the application is configured to accept credentials associated with a user's social media account, such as a Google® or Facebook®, or at block S234, the user may sign in using an email account. It will be appreciated that the user may also sign in using a sign-in name or any equivalents thereof.” It would have been obvious to one having ordinary skill in the art to modify the software application to include a login step for logging in a user, as taught in GADIYAR. One would have been motivated to add the login step to prevent others from accessing sensitive medical information stored on the smartphone. There would have been a reasonable expectation of success, as logins are frequently required when accessing a program with medically sensitive information.
With respect to claim 14, PRAKASH and GADIYAR teach an artificial intelligence image recognition oral cavity cancer screening method using the interactive oral cavity photography system of claim 5 (see discussion of the interactive oral cavity photography system in the rejection of claim 5). The method comprises: a guiding step (see, e.g., GUI pages 601, 602, and 603 in Figs. 6A, 6B, and 6C, respectively, in PRAKASH and Figures 11A-11Z in GADIYAR): providing a reference schematic image of at least two different locations in the oral cavity (see, e.g., Figure 11G of GADIYAR, having multiple reference schematic images. NOTE: As discussed above with respect to claim 1, the reference schematic images would be of locations within the oral cavity, as one skilled in the art would choose images from within the oral cavity in order to complete the workflow in PRAKASH.); an image capturing step ([0031]: “smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.”): capturing at least two oral mucosal images of the at least two different locations in the patient's oral cavity ([0111]: the GUIs direct the user to capture and digitize seven images: “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” and “Tongue Right (TR)”; see also the images in Figures 9A-9F) based on a guide line corresponding to the reference schematic image (this would have been obvious to one having ordinary skill in the art, as discussed above, based on GADIYAR's teachings at [0023]), and digitizing the at least two oral mucosal images ([0082]: “The raw frame image is stored in the memory of the camera, such as the cell phone 280 (e.g., in memory 1351)….”); an artificial intelligence image recognition step: receiving the at least two oral mucosa images and generating a result through a graphic recognition algorithm (it would have been obvious to one having ordinary skill in the art, as discussed above with respect to claim 1, to modify or replace the “automated algorithm” in PRAKASH with the machine-learning system as taught in GADIYAR at [0024] and [0020], [0033]); and a storage step: storing the at least two oral mucosa images and the result ([0100]: “In step 485, the one or more standard images and metadata, including any automated analysis results, are stored in memory, either on the local device or on the remote server 292.” (emphasis added)).

PRAKASH does not explicitly teach a login step: logging in as a user. However, logging into an application, especially a medically related application, is well known. Nevertheless, GADIYAR teaches: “For example, a user may select a doctor login module at block S225 or a patient login module at block S230. If the user is a patient, the application requires login credentials from the user. At block S232, the application is configured to accept credentials associated with a user's social media account, such as a Google® or Facebook®, or at block S234, the user may sign in using an email account. It will be appreciated that the user may also sign in using a sign-in name or any equivalents thereof.” It would have been obvious to one having ordinary skill in the art to modify the software application to include a login step for logging in a user, as taught in GADIYAR. One would have been motivated to add the login step to prevent others from accessing sensitive medical information stored on the smartphone.
There would have been a reasonable expectation of success, as logins are frequently required when accessing a program with medically sensitive information.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. No. 2013/0209954 A1 (“PRAKASH”) and U.S. Patent Appl. No. 2023/0237650 A1 (“GADIYAR”) as applied to claim 1 above, and further in view of the translation of JP 2019-213652 A (hereinafter “HAMAMOTO”).

With respect to claim 3, PRAKASH does not explicitly teach that the guide line correspondingly draws an outline of the reference schematic image. HAMAMOTO teaches an imaging device for tongue diagnosis. HAMAMOTO teaches image processing on a tongue image to extract a tongue region, extract features from the tongue region, and execute tongue diagnosis by machine learning. (Lines 86-87). To address the difficulty of aligning the tongue consistently for imaging, HAMAMOTO describes using a target scope image TS that is superimposed over the real image on the display. (Figures 9 and 11). The target scope image TS is a number of interconnected lines (TS1, TS2, TS4) that outline the shape of a standard human tongue (i.e., a reference schematic). (Line 265). A centerline TS3 also runs through the middle. During imaging, the user proceeds through a series of stages in which, at each stage, the user aligns a different portion of the tongue with one of the lines of the target scope image TS. (Lines 280-300). As such, the user can rapidly align the camera for taking consistent images.

It would have been obvious to one having ordinary skill in the art to configure the software to provide an outline that is superimposed over the view as an image of the oral cavity is acquired. One would have been motivated to use an outline of at least a portion of the schematic image in order to provide a reference for aligning the view of the camera so as to produce consistent images for review. There would have been a reasonable expectation of success, as HAMAMOTO teaches that an outline can be used to orient anatomy prior to imaging.

With respect to claim 12, PRAKASH and GADIYAR teach an artificial intelligence image recognition oral cavity cancer screening method using the interactive oral cavity photography system of claim 3 (see discussion of the interactive oral cavity photography system in the rejection of claim 1). The method comprises: a guiding step (see, e.g., GUI pages 601, 602, and 603 in Figs. 6A, 6B, and 6C, respectively, in PRAKASH and Figures 11A-11Z in GADIYAR): providing a reference schematic image of at least two different locations in the oral cavity (see, e.g., Figure 11G of GADIYAR, having multiple reference schematic images.
NOTE: As discussed above with respect to claim 1, the reference schematic images would be of locations within the oral cavity, as one skilled in the art would choose images from within the oral cavity in order to complete the workflow in PRAKASH.); an image capturing step ([0031]: “smart phone, e.g., a programmable cell phone with on board (built-in) processor and digital camera.”): capturing at least two oral mucosal images of the at least two different locations in the patient's oral cavity ([0111]: the GUIs direct the user to capture and digitize seven images: “Left (L),” “Left of Center (LC),” “Center (C),” “Right of Center (RC),” “Right (R),” “Tongue Left (TL),” and “Tongue Right (TR)”; see also the images in Figures 9A-9F) based on a guide line corresponding to the reference schematic image (this would have been obvious to one having ordinary skill in the art, as discussed above, based on GADIYAR's teachings at [0023]), and digitizing the at least two oral mucosal images ([0082]: “The raw frame image is stored in the memory of the camera, such as the cell phone 280 (e.g., in memory 1351)….”); an artificial intelligence image recognition step: receiving the at least two oral mucosa images and generating a result through a graphic recognition algorithm (it would have been obvious to one having ordinary skill in the art, as discussed above with respect to claim 1, to modify or replace the “automated algorithm” in PRAKASH with the machine-learning system as taught in GADIYAR at [0024] and [0020], [0033]); and a storage step: storing the at least two oral mucosa images and the result ([0100]: “In step 485, the one or more standard images and metadata, including any automated analysis results, are stored in memory, either on the local device or on the remote server 292.” (emphasis added)).

PRAKASH does not explicitly teach a login step: logging in as a user. However, logging into an application, especially a medically related application, is well known. Nevertheless, GADIYAR teaches: “For example, a user may select a doctor login module at block S225 or a patient login module at block S230. If the user is a patient, the application requires login credentials from the user. At block S232, the application is configured to accept credentials associated with a user's social media account, such as a Google® or Facebook®, or at block S234, the user may sign in using an email account. It will be appreciated that the user may also sign in using a sign-in name or any equivalents thereof.” It would have been obvious to one having ordinary skill in the art to modify the software application to include a login step for logging in a user, as taught in GADIYAR. One would have been motivated to add the login step to prevent others from accessing sensitive medical information stored on the smartphone. There would have been a reasonable expectation of success, as logins are frequently required when accessing a program with medically sensitive information.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. No. 2013/0209954 A1 (“PRAKASH”) and U.S. Patent Appl. No. 2023/0237650 A1 (“GADIYAR”) as applied to claim 6 above, and further in view of Subhash, Narayanan, et al., “Bimodal multispectral imaging system with cloud-based machine learning algorithm for real-time screening and detection of oral potentially malignant lesions and biopsy guidance,” Journal of Biomedical Optics 26.8 (2021): 086003 (hereinafter “SUBHASH”).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Appl. No. 2013/0209954 A1 (hereinafter “PRAKASH”) and U.S. Patent Appl. No. 2023/0237650 A1 (hereinafter “GADIYAR”) as applied to claim 6 above, and further in view of Subhash, Narayanan, et al., "Bimodal multispectral imaging system with cloud-based machine learning algorithm for real-time screening and detection of oral potentially malignant lesions and biopsy guidance," Journal of Biomedical Optics 26.8 (2021): 086003 (hereinafter “SUBHASH”).

PRAKASH does not explicitly teach the method further comprising a risk warning step: providing warnings of different color lights according to the results to correspond to a risk level of oral cavity cancer, wherein the different color lights at least include a green light, a yellow light, and a red light; the green light indicates that the risk level is low risk, the yellow light indicates that the risk level is medium risk, and the red light indicates that the risk level is high risk. In the same field of endeavor, SUBHASH teaches a hand-held bimodal multispectral imaging system (BMIS) that is used for oral cancer screening. (Abstract). SUBHASH also describes software that is used to acquire and analyze the image data: “The BMIS is thus configured to capture multimodal images of oral mucosa using its integrated hardware and proprietary software.” (2.1 Instrumentation). After acquiring different images of a region, the results are presented in a color-coded map: “The screening result is presented in a color-coded display diagram (CDD), with the pointer showing the highest R610/R545 ratio value in the ROI, representing the most malignant site in the OPML. Green color in the CDD represents healthy/normal tissue, yellow represents suspect (OPML), and red represents critical (malignant) lesions.” (2.6 Image Acquisition and Analysis of the Tissue Characteristics). It would have been obvious to one having ordinary skill in the art to configure the software to include a risk warning step, as recited in claim 7. One would have been motivated to provide a color-coded map of the various regions within the oral cavity, in which the green light indicates low risk, the yellow light indicates medium risk, and the red light indicates high risk, because such maps provide a visual representation of an entire region at once that is easier to understand. Moreover, the color-coding is one that individuals would intuitively understand (e.g., red is high risk, yellow is medium risk, and green is low risk). There would have been a reasonable expectation of success as SUBHASH demonstrates that a color-coded map can be applied to oral cancer screening results.
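To make the risk warning step concrete, the sketch below maps a diagnostic score to the claimed green/yellow/red levels, loosely following SUBHASH's color-coded display diagram. The specific R610/R545 thresholds are invented for illustration; SUBHASH's published cutoffs are not reproduced here.

```python
from enum import Enum

class Risk(Enum):
    LOW = "green"      # healthy/normal tissue
    MEDIUM = "yellow"  # suspect (OPML)
    HIGH = "red"       # critical (malignant) lesion

def risk_from_ratio(r610_r545: float,
                    suspect: float = 1.0,
                    critical: float = 1.5) -> Risk:
    """Map an R610/R545 intensity ratio to a three-level risk warning.
    The `suspect` and `critical` thresholds are hypothetical placeholders."""
    if r610_r545 >= critical:
        return Risk.HIGH
    if r610_r545 >= suspect:
        return Risk.MEDIUM
    return Risk.LOW

# Usage: higher ratios escalate the warning color.
assert risk_from_ratio(0.8) is Risk.LOW
assert risk_from_ratio(1.2) is Risk.MEDIUM
assert risk_from_ratio(1.7) is Risk.HIGH
```

A per-region map could then be rendered by coloring each imaged location with its Risk value, which matches the examiner's rationale that a color-coded map shows an entire region at once in an intuitively understood scheme.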
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON P GROSS whose telephone number is (571) 272-1386. The examiner can normally be reached Monday-Friday, 9:00-5:00 CT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M. Kozak, can be reached at (571) 270-5284. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON P GROSS/
Examiner, Art Unit 3797

/SERKAN AKAR/
Primary Examiner, Art Unit 3797

Prosecution Timeline

Dec 13, 2024
Application Filed
Dec 13, 2025
Non-Final Rejection — §101, §103, §112
Mar 27, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582472
SYSTEMS FOR DETERMINING SIZE OF KIDNEY STONE
2y 5m to grant Granted Mar 24, 2026
Patent 12514554
PRE-OPERATIVE ULTRASOUND SCANNING SYSTEM FOR PATIENT LIMB EXTENDING THROUGH A RESERVOIR
2y 5m to grant Granted Jan 06, 2026
Patent 12502157
ULTRASOUND SYSTEM HAVING A DISPLAY DEVICE WITH DYNAMIC SCROLL MODE FOR B-MODE AND M-MODE IMAGES
2y 5m to grant Granted Dec 23, 2025
Patent 12453602
ULTRASONIC PUNCTURE GUIDANCE PLANNING SYSTEM BASED ON MULTI-MODAL MEDICAL IMAGE REGISTRATION USING AN ITERATIVE CLOSEST POINT ALGORITHM
2y 5m to grant Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+62.5%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
