Prosecution Insights
Last updated: April 19, 2026
Application No. 18/690,209

Recording Medium, Method for Generating Learning Model, and Information Processing Device

Non-Final OA: §103, §112

Filed: Sep 13, 2024
Examiner: LI, RAYMOND CHUN LAM
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Anaut Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 10 total applications across all art units; 10 currently pending

Statute-Specific Performance

§103: 55.6% (+15.6% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 26.7% (-13.3% vs TC avg)

Tech Center average is an estimate; based on career data from 0 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 26 and 30 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 26 recites the limitation "the other portion" in line 5. There is insufficient antecedent basis for this limitation in the claim.

Claim 30 recites the limitation "the recognition unit" in line 9. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 16-18, 22-26, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 20190069957 A1).

Regarding Claim 16, Barral teaches a non-transitory computer readable recording medium storing a computer program causing a computer to execute processing (Paragraph [0041]: "The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described") of: Acquiring an operative field image obtained by shooting an operative field of a scope-assisted surgery (Paragraph [0026]: "an image sensor (in camera 101) is coupled to capture a video of a surgery performed by surgical robot 121"; Paragraph [0013]: "For example, in cholecystectomy (removal of gallbladder), the systems disclosed here trains a model on frames extracted from laparoscopic videos"); and recognizing an organ portion in the acquired operative field image by inputting the acquired operative field image to a learning model trained to output information relevant to the organ portion included in the operative field image in accordance with input of the operative field image (Paragraph [0012]: "The instant disclosure provides for a system and method to recognize organs and other anatomical structures in the body while performing surgery"; Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures… Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them… In some embodiments, a distinctive color or a label can then be added to the annotation").

Barral does not explicitly teach recognizing a pancreas portion in the acquired operative field image, and outputting relevant information with regards to the pancreas portion in accordance with the input operative field image. However, Barral teaches recognizing an organ portion; it is noted that a pancreas is an organ. Considering that the system of Barral is intended for use in recognizing many different kinds of organs (Paragraph [0012]: "The instant disclosure provides for a system and method to recognize organs and other anatomical structures in the body while performing surgery"), a person having ordinary skill in the art would find it obvious that a pancreas is included in the general umbrella of being an organ. Hence, Barral implicitly teaches recognizing a pancreas portion, and outputting information relevant to the pancreas portion in accordance with the input operative field image. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to use the organ recognition system of Barral for recognizing a pancreas portion.

Regarding Claim 17, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a computer program causing the computer to further execute processing of: Recognizing the pancreas portion excluding a portion corresponding to a blood vessel, fat, an interlobular groove, or an interlobular shadow appearing on a surface of a pancreas, on the basis of the information output from the learning model (Paragraph [0015]: "The annotations could be toggled on/off by the surgeon, at will, and the surgeon could also specify which type of annotations are desired (e.g., highlight blood vessels but not organs)"; Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures". Notes: since the model can learn to identify blood vessels, it implicitly learns to identify a pancreas portion excluding a portion corresponding to a blood vessel); and Displaying the pancreas portion recognized by excluding the portion on the operative field image (Paragraph [0012]: "The system achieves this goal by producing an annotated video feed, or other alerts (e.g., sounds, lights, etc.) that inform the surgeon which parts of the body he/she is looking at (e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them)"; Paragraph [0015]: "The annotations could be toggled on/off by the surgeon, at will, and the surgeon could also specify which type of annotations are desired (e.g., highlight blood vessels but not organs)". Notes: being able to highlight the organ and not the blood vessels is implicit, considering the model is able to highlight the blood vessel and not the organ, and that the model is able to highlight the organ).

Regarding Claim 18, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a computer program causing the computer to further execute processing of: Changing a display mode of the pancreas portion, in accordance with a confidence of a recognition result of the pancreas portion (Paragraph [0020]: "Another aspect of this disclosure consists of the reverse system: instead of displaying to the surgeon anatomical overlays when there is high confidence"; Paragraph [0012]: "The system achieves this goal by producing an annotated video feed, or other alerts (e.g., sounds, lights, etc.) that inform the surgeon which parts of the body he/she is looking at (e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them)". Notes: confidence is used to make a decision on whether the entity in question is to be classified with a particular label; this is obvious in the art, and implicit as considered in Paragraph [0020]. Additionally, it is obvious in the art that if the confidence threshold is not met, the entity is not classified, and thus wouldn't be highlighted).

Regarding Claim 22, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a computer program causing the computer to further execute processing of: Switching display and non-display of the pancreas portion, in accordance with a dynamic state of a specific part in the operative field image (Paragraph [0012]: "The system achieves this goal by producing an annotated video feed, or other alerts (e.g., sounds, lights, etc.) that inform the surgeon which parts of the body he/she is looking at (e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them)"). Notes: It is obvious in the art that classification of an entity depends on the confidence in the classification. Additionally, a person having ordinary skill in the art would appreciate that depending on the view/current state of the entity to be classified, the entity may or may not be classified. Movement captured by a video feed (in which each frame is fed into a classification model) can cause the confidence in classifying that entity to fluctuate. Therefore, it is implicit that a dynamic state of a specific part in the operative field image can lead to switching display and non-display of the pancreas portion, as Barral teaches highlighting the recognized organ.

Regarding Claim 23, the recording medium according to Claim 16 is rejected over Barral.
Barral teaches a computer program causing the computer to further execute processing of: Switching the display and the non-display of the pancreas portion, in accordance with a dynamic state of a surgical tool included in the operative field image (Paragraph [0012]: "The system achieves this goal by producing an annotated video feed, or other alerts (e.g., sounds, lights, etc.) that inform the surgeon which parts of the body he/she is looking at (e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them)"; Paragraph [0025]: "As shown, surgical robot 121 may be used to hold surgical instruments (e.g., each arm holds an instrument at the distal ends of the arm) and perform surgery, diagnose disease, take biopsies, or conduct any other procedure a doctor could perform. Surgical instruments may include scalpels, forceps, cameras (e.g., camera 101) or the like". Notes: It is obvious in the art that classification of an entity depends on the confidence in the classification. Additionally, a person having ordinary skill in the art would appreciate that depending on the view/current state of the entity to be classified, the entity may or may not be classified. Movement captured by a video feed (in which each frame is fed into a classification model) can cause the confidence in classifying that entity to fluctuate. Therefore, it is implicit that a dynamic state of a specific part in the operative field image, such as a moving surgical tool, can lead to switching display and non-display of the pancreas portion, as Barral teaches highlighting the recognized organ).

Regarding Claim 24, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a computer program causing the computer to further execute processing of: Periodically switching display and non-display of the pancreas portion (Paragraph [0022]: "Surgeons would have the option to turn on/off the real-time video interpretation engine at any time during the procedure, or have it run in the background but not display anything").

Regarding Claim 25, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a computer program causing the computer to further execute processing of: Applying a predetermined effect to the display of the pancreas portion (Paragraph [0026]: "For instance, processing apparatus 107 may identify anatomical features in the video using a machine learning algorithm, and generate an annotated video where the anatomical features from the video are accentuated (e.g., by modifying the color of the anatomical features, surrounding the anatomical feature with a line, or labeling the anatomical features with characters)").

Regarding Claim 26, Barral teaches a non-transitory computer readable recording medium storing a computer program causing a computer to execute processing (Paragraph [0041]: "The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described") of: Acquiring an operative field image obtained by shooting an operative field of a scope-assisted surgery (Paragraph [0026]: "an image sensor (in camera 101) is coupled to capture a video of a surgery performed by surgical robot 121"; Paragraph [0013]: "For example, in cholecystectomy (removal of gallbladder), the systems disclosed here trains a model on frames extracted from laparoscopic videos"); recognizing a pancreas portion and the other portion in the acquired operative field image by inputting the acquired operative field image to a first learning model and a second learning model separately (Paragraph [0012]: "The instant disclosure provides for a system and method to recognize organs and other anatomical structures in the body while performing surgery"; Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures… Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them… In some embodiments, a distinctive color or a label can then be added to the annotation"; Paragraph [0016]: "The systems and methods disclosed here also have the ability to perform real-time video segmentation and annotation during a surgical case. It is important to distinguish between spatial segmentation where, for example, anatomical structures are marked (e.g., liver, gallbladder, cystic duct, cystic artery, etc.)"; Paragraph [0017]: "For spatial segmentation, both single-task and multi-task neural networks could be trained to learn the anatomy. In other words, all the anatomy could be learned at once, or specific structures could be learned one by one"; Paragraph [0015]: "The annotations could be toggled on/off by the surgeon, at will, and the surgeon could also specify which type of annotations are desired (e.g., highlight blood vessels but not organs)". Notes: Single-task neural networks are trained to learn/classify one type of object on input; hence, each neural network is its own model, which is used for identifying an organ/blood vessel or other anatomical component in Barral. Note that a pancreas is an organ; refer to the obviousness as stated in the rejection of Claim 16); the first learning model being trained to output information relevant to the pancreas portion included in the operative field image (Paragraph [0013]: "More generally, the deep learning model can receive any number of video inputs from different types of cameras (e.g. RGB cameras, IR cameras, molecular cameras, spectroscopic inputs, etc.) and then proceed to not only highlight the organ of interest"); and the second learning model being trained to output information relevant to the other portion to be distinguished from the pancreas included in the operative field image, in accordance with input of the operative field image (Paragraph [0012]: "e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them"; Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures"); setting a display mode of the pancreas portion and the other portion on the operative field image, in accordance with a level of a confidence of a recognition result by the first learning model and a confidence of a recognition by the second learning model (Paragraph [0020]: "Another aspect of this disclosure consists of the reverse system: instead of displaying to the surgeon anatomical overlays when there is high confidence"; Paragraph [0015]: "The annotations could be toggled on/off by the surgeon, at will, and the surgeon could also specify which type of annotations are desired (e.g., highlight blood vessels but not organs)"; Paragraph [0012]: "The system achieves this goal by producing an annotated video feed, or other alerts (e.g., sounds, lights, etc.) that inform the surgeon which parts of the body he/she is looking at (e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them)". Notes: It is obvious in the art that classification of an entity depends on the confidence in the classification. Additionally, a person having ordinary skill in the art would appreciate that depending on the view/current state of the entity to be classified, the entity may or may not be classified, or may be classified differently. It is obvious in the art that an object would be classified with the label with the highest confidence, and hence, switching display of the pancreas portion and the other portion is implicit); and displaying the pancreas portion and the other portion on the operative field image in the set display mode (Paragraph [0020]: "Another aspect of this disclosure consists of the reverse system: instead of displaying to the surgeon anatomical overlays when there is high confidence"; Paragraph [0012]: "The systems and methods disclosed here solves this problem using a computerized device to bring the knowledge gained from many similar cases to each operation. The system achieves this goal by producing an annotated video feed, or other alerts (e.g., sounds, lights, etc.) that inform the surgeon which parts of the body he/she is looking at (e.g., highlighting blood vessels in the video feed to prevent the surgeon from accidentally cutting through them). Previously, knowledge of this type could only be gained by trial and error (potentially fatal in the surgical context), extensive study, and observation. The system disclosed here provides computer/robot-aided guidance to a surgeon in a manner that cannot be achieved through human instruction or study alone. In some embodiments, the system can tell the difference between two structures that the human eye cannot distinguish between (e.g., because the structures' color and shape are similar)").

Regarding Claim 29, Barral teaches a method for generating a learning model causing a computer to execute processing (Paragraph [0041]: "The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described") of: Acquiring training data including an operative field image obtained by shooting an operative field of a scope-assisted surgery (Paragraph [0026]: "an image sensor (in camera 101) is coupled to capture a video of a surgery performed by surgical robot 121"; Paragraph [0013]: "For example, in cholecystectomy (removal of gallbladder), the systems disclosed here trains a model on frames extracted from laparoscopic videos"), and Ground truth data indicating a pancreas portion in the operative field image (Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures… Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them… In some embodiments, a distinctive color or a label can then be added to the annotation". Notes: ground truth data is inherent to deep learning/classification learning models. A pancreas is an organ, which is covered by anatomical structures. Refer to obviousness analysis of the rejection of Claim 16); and Generating a learning model for outputting information relevant to the pancreas portion in accordance with input of the operative field image on the basis of a set of acquired training data pieces (Paragraph [0012]: "The instant disclosure provides for a system and method to recognize organs and other anatomical structures in the body while performing surgery"; Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures… Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them… In some embodiments, a distinctive color or a label can then be added to the annotation").

Regarding Claim 30, Barral teaches an information processing device (Paragraph [0041]: "The processes explained above are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described"), comprising: One or more processors (Paragraph [0033]: "As shown, system 200 includes… processing apparatus"); and A storage storing instructions causing any of the one or more processors to execute processing of (Paragraph [0041]: "The techniques described may constitute machine-executable instructions embodied within a tangible or non-transitory machine (e.g., computer) readable storage medium, that when executed by a machine will cause the machine to perform the operations described"): Acquiring an operative field image obtained by shooting an operative field of a scope-assisted surgery (Paragraph [0026]: "an image sensor (in camera 101) is coupled to capture a video of a surgery performed by surgical robot 121"; Paragraph [0013]: "For example, in cholecystectomy (removal of gallbladder), the systems disclosed here trains a model on frames extracted from laparoscopic videos"); Recognizing a pancreas portion in the acquired operative field image by inputting the acquired operative field image to a learning model trained to output information relevant to the pancreas portion included in the operative field image in accordance with input of the operative field image (Paragraph [0012]: "The instant disclosure provides for a system and method to recognize organs and other anatomical structures in the body while performing surgery"; Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures… Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them… In some embodiments, a distinctive color or a label can then be added to the annotation". Notes: A pancreas is an organ, which is covered by anatomical structures. Refer to obviousness analysis of the rejection of Claim 16); and Outputting information based on a recognition result by the recognition unit (Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures… Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them… In some embodiments, a distinctive color or a label can then be added to the annotation").

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 20190069957 A1) in view of Stack Overflow (2020 Election Insights: A Lot of Gray and Purple, 2020).

Regarding Claim 19, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a recording medium wherein the computer program causing the computer to further execute processing of: Displaying the pancreas portion in a set display color in the operative field image (Paragraph [0013]: "Once image classification has been learned by the algorithm, the device may use a sliding window approach to find the relevant structures in videos and highlight them, for example by delineating them with a bounding box. In some embodiments, a distinctive color or a label can then be added to the annotation". Notes: Organ is an anatomical structure. Refer to analysis of obviousness of the rejection of Claim 16). Barral does not teach averaging a set display color of the pancreas portion with the color of the pancreas portion in the operative field image, and coloring and displaying the recognized pancreas portion in the averaged color.
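As an illustrative aside, the simple per-channel color averaging at issue in this rejection can be sketched as below; the function name and the 8-bit (R, G, B) tuple representation are assumptions for illustration, not drawn from any cited reference.

```python
def average_color(set_color, pixel_color):
    # Per-channel average of two (R, G, B) colors with 0-255 components.
    # Hypothetical helper; integer division keeps results in the 8-bit range.
    return tuple((a + b) // 2 for a, b in zip(set_color, pixel_color))

# Blending a pure-blue highlight with a flesh-toned pixel color:
print(average_color((0, 0, 255), (220, 150, 140)))  # -> (110, 75, 197)
```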
However, Stack Overflow teaches the concept of averaging colors for display (the post by eft, edited by Ilmari Karonen, demonstrates averaging two colors via different methods, such as RGB values). While Stack Overflow does not teach averaging colors with respect to entities in an image, the concept of doing so is obvious in the art. Barral and Stack Overflow are analogous in the art with respect to the use of color for visual display. A common motivation in the art is to mix (average) colors to produce different gradients of a color for various effects, such as relating the colors with information pertaining to the objects that are that particular color, or otherwise improving the visual appeal of the display, as is implied in Stack Overflow (user41871, edited by User: "I don't know whether taking a simple average of the components is the 'best' from a perceptual point of view (that sounds like a question for a psychologist), but here are a couple of examples using simple component averaging"). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the displaying of a pancreas portion in a certain set color of Barral with the averaging of color, and the motivation for doing so, of Stack Overflow; doing so would yield the predictable result of highlighting the pancreas portion in a color that may be more visually appealing in the context of the image.

Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 20190069957 A1) in view of El Katerji (US 20200222607 A1).

Regarding Claim 20, the recording medium according to Claim 16 is rejected over Barral. Barral teaches a learning model wherein the learning model recognizes the pancreas portion (Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures". Notes: Organs are included in anatomical structures; refer to analysis of obviousness in the rejection of Claim 16). Barral does not teach a plurality of learning models, in which a model is selected in accordance with an attribute of a patient, prior information before surgery, a surgical approach, a part of the pancreas, or a type of imaging device shooting the operative field. However, El Katerji teaches selecting a learning model from a plurality of available learning models in accordance with attributes of a patient (Paragraph [0014]: "In some implementations, the methods and systems access a model by determining a selected model from a plurality of available models. In some implementations, the selected model is determined based on information associated with the patient. In some implementations, the method includes choosing a model formed by a neural network"). Barral and El Katerji are considered analogous in the art with respect to learning models involving organs. A generic motivation in the art is to use a learning model that performs optimally. In general, different learning models have different advantages, such as how the quantity of inputted data affects the output. As a result, one ordinarily skilled in the art would seek to try using a plurality of learning models on a particular task; choosing the best performing model with regards to known parameters, such as inputted data dimensions, is an established practice. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the pancreas recognition learning model of Barral with the learning model selection process in accordance with attributes of a patient of El Katerji; doing so would yield the predictable result of a learning model that optimally recognizes a pancreas portion based on patient information.

Regarding Claim 21, the recording medium according to Claim 16 is rejected over Barral. Barral as modified by El Katerji teaches a learning model wherein the learning model includes a plurality of types of learning models for recognizing the pancreas portion (El Katerji, Paragraph [0014]: "In some implementations, the methods and systems access a model by determining a selected model from a plurality of available models. In some implementations, the selected model is determined based on information associated with the patient. In some implementations, the method includes choosing a model formed by a neural network"; Barral, Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures". Notes: Organ is an anatomical structure. Refer to analysis of obviousness of the rejection of Claim 16), and the computer program causing the computer to further execute processing of: Recognizing the pancreas portion included in the operative field image by using a learning model selected on the basis of an evaluation result (Barral, Paragraph [0013]: "The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures"). Barral as modified by El Katerji does not explicitly teach evaluating each of the learning models, on the basis of information output from each of the learning models when the operative field image is input. However, it is obvious motivation in the art to evaluate the performance of models for a specific task, and choose the best performing model for the task.
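A minimal sketch of the evaluate-then-select practice referred to above, assuming a generic held-out evaluation set and a summed per-item metric; all names and signatures here are illustrative, not drawn from Barral or El Katerji.

```python
def select_best_model(models, eval_set, metric):
    """Score each candidate model on a held-out evaluation set and return
    the best-scoring model name together with all scores.

    models:   dict mapping name -> callable(input) -> prediction
    eval_set: iterable of (input, ground_truth) pairs
    metric:   callable(prediction, ground_truth) -> numeric score (higher is better)
    """
    scores = {
        name: sum(metric(model(x), truth) for x, truth in eval_set)
        for name, model in models.items()
    }
    return max(scores, key=scores.get), scores

# Toy demo: exact-match "models" on numeric stand-ins for images.
models = {"model_a": lambda x: x, "model_b": lambda x: x + 1}
eval_set = [(1, 1), (2, 2), (3, 3)]
best, scores = select_best_model(models, eval_set, lambda p, t: int(p == t))
print(best)  # -> model_a
```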
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the selection of a model from a plurality of learning models for identifying a pancreas portion of Barral as modified by El Katerji with the common motivation to select the best-performing model of a plurality of learning models; doing so would yield the predictable result of adhering to best practices regarding learning models, and would result in a learning model that performs the pancreas recognition task optimally. Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 20190069957 A1) in view of Al Jazaery (US 20210201661 A1). Regarding Claim 27, the recording medium according to Claim 26 is rejected over Barral. Barral teaches a computer program that executes processing of selectively displaying a pancreas portion recognized by a first learning model or the other portion recognized by a second learning model based on confidence (Paragraph [0013]: “The instant disclosure trains a machine learning model (e.g., a deep learning model) to recognize specific anatomical structures within surgical videos, and highlight these structures”; Paragraph [0020]: “instead of displaying to the surgeon anatomical overlays when there is high confidence”). Barral does not teach selectively displaying the pancreas portion or the other portion based on confidence when the pancreas portion and the other portion overlap.
However, Al Jazaery teaches displaying a recognized portion or the other recognized portion based on confidence when the portions overlap (Paragraph [0081]: “In some embodiments, the first image processing process is a single-pass detection process (e.g., the first input image is passed through the first image processing process only once and all first ROIs (if any) are identified such as You-Only-Look-Once detection or Single-Shot-Multibox-Detection algorithms)… determining, using a first neural network (e.g., the first neural network has previously been trained using labelled images with predefined objects and bounding boxes), a plurality of bounding boxes each encompassing a predicted predefined portion of the human user (e.g., a predicted upper body of the human user, e.g., with the locations of the head and shoulders labeled), wherein a center of the predicted predefined portion of the human user falls within the respective grid cell, and wherein each of the plurality of bounding boxes is associated with a class confidence score indicating a confidence level of a classification (e.g., the type of the object, e.g., a portion of the human body)… In some embodiments, the class confidence score is a product of localization confidence and classification confidence”; and “identifying a bounding box with a highest class confidence score in the respective grid cell (e.g., each grid cell will only predict at most one object by removing duplicate bounding boxes through non-maximum suppression process that keeps the bounding box with the highest confidence score and removes any other boxes that overlap the bounding box with the highest confidence score by more than a certain threshold”). Barral and Al Jazaery are considered analogous art with respect to the use of a learning model to classify portions of an image.
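The non-maximum suppression process quoted from Al Jazaery can be sketched in simplified form (the box coordinates, scores, and threshold below are illustrative assumptions, not values taken from the reference):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the box with the highest confidence score; remove any other box
    that overlaps it by more than the threshold, then repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

# Two heavily overlapping boxes and one separate box (illustrative values).
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```

The duplicate lower-confidence box (index 1) is suppressed because it overlaps the highest-confidence box beyond the threshold, matching the "keeps the bounding box with the highest confidence score" behavior the quotation describes.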
A common motivation in the art is to ensure that classification attempts are as accurate as possible; the use of confidence scores to do so is well established in the art. In a case where a portion has information that leads to conflicting classifications (such as a portion having attributes that cause it to be potentially identifiable under multiple classes), the portion is classified based on the highest of the confidence scores of the candidate classes. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the learning models of Barral, which utilize confidence to identify a pancreas portion and another portion, with Al Jazaery's method of resolving overlapping portions by classification confidence; doing so would yield the predictable result of identifying a pancreas portion and another portion with the highest likelihood of accurate classification. Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 20190069957 A1) in view of Selterman (2020 Election Insights: A Lot of Gray and Purple, 2020). Regarding Claim 28, the recording medium according to Claim 26 is rejected over Barral. Barral teaches a computer program causing the computer to further execute processing of displaying colors in accordance with the confidence of the first learning model recognizing a pancreas portion and the confidence of the second learning model recognizing the other portion (Paragraph [0013]: “In some embodiments, a distinctive color or a label can then be added to the annotation”. Notes: Barral is capable of identifying multiple organs/anatomical structures, so displaying the annotation (highlight) for each anatomical structure in a distinct color displays each annotation in a different color). Barral does not teach mixing display colors for regions that overlap with respect to the pancreas portion and the other portion.
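For illustration only, mixing display colors for an overlapping region can be sketched as a weighted RGB blend (a hypothetical sketch; neither cited reference specifies this formula):

```python
def mix_colors(c1, c2, w1=0.5):
    """Blend two RGB colors; an overlapping region is shown in the weighted mix."""
    w2 = 1.0 - w1
    return tuple(round(w1 * a + w2 * b) for a, b in zip(c1, c2))

# An even blend of red and blue yields a purple, analogous to the map coloring
# discussed for Selterman below.
red, blue = (255, 0, 0), (0, 0, 255)
print(mix_colors(red, blue))  # (128, 0, 128)
```

The weight could be driven by the respective confidence scores, so that a region recognized more strongly under one label shades toward that label's color.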
However, Selterman teaches mixing display colors when a region recognized under one label overlaps with a region recognized under another label, and displaying the region with the mixed color (the first picture, depicting a map of the United States, clearly shows political leanings with regard to the 2020 election; each state has a ratio attributed to the percentage of voters that support Biden vs. Trump, and states with heavy overlap (similar numbers of Biden and Trump voters) are displayed in various shades of purple, depending on the ratio, while states much more clearly composed of voters for Biden or voters for Trump are in colors closer to blue and red, respectively). Barral and Selterman are considered analogous art with respect to identifying/recognizing regions of an image with a particular label. A common motivation in the art is to display overlapping areas with a contextual relationship in a color that is a mix of the unique color of each area. Therefore, it would have been obvious to a person having ordinary skill in the art to combine the mixing of colors in overlapping regions of Selterman with the method of displaying colors of specific regions identified via learning models in Barral; doing so would yield the predictable result of allowing the viewer to more easily identify areas of overlap with regard to anatomical structures of the human body as displayed in the image.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAYMOND CHUN LAM LI, whose telephone number is (571) 272-5124. The examiner can normally be reached M-F 8:30-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RAYMOND CHUN LAM LI/Examiner, Art Unit 2614 /KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614

Prosecution Timeline

Sep 13, 2024: Application Filed
Mar 11, 2026: Non-Final Rejection — §103, §112 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
