Prosecution Insights
Last updated: April 19, 2026
Application No. 18/074,408

SURGICAL IMAGING SYSTEM AND METHOD

Non-Final OA (§103, §112)
Filed: Dec 02, 2022
Examiner: RODRIGUEZ, ANTHONY JASON
Art Unit: 2672
Tech Center: 2600 — Communications
Assignee: Nsight Surgical Inc.
OA Round: 1 (Non-Final)
Grant Probability: 17% (At Risk)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: -5%

Examiner Intelligence

Grants only 17% of cases.

Career Allow Rate: 17% (3 granted / 18 resolved; -45.3% vs TC avg)
Interview Lift: -21.4% (minimal lift) [chart: allow rate with vs. without interview among resolved cases]
Avg Prosecution (typical timeline): 3y 2m (47 currently pending)
Total Applications (career history): 65 (across all art units)
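
The tiles above are simple ratios over this examiner's resolved cases. A minimal sketch of the arithmetic, assuming the "vs TC avg" delta is an additive percentage-point difference (the granted/resolved counts come from this page; the TC-average figure is back-derived from the stated -45.3% delta, and the interview-lift definition is an assumption, not a documented formula):

```python
# Reconstruction sketch of the examiner tiles above, not the dashboard's
# actual code. Counts come from this page: 3 granted of 18 resolved.
granted, resolved = 3, 18

career_allow_rate = granted / resolved            # 0.167 -> shown as 17%
print(f"Career allow rate: {career_allow_rate:.0%}")

# The page states -45.3% vs TC avg; under an additive percentage-point
# delta, the implied Tech Center average allow rate is:
tc_avg = career_allow_rate - (-0.453)             # ~0.62 -> ~62%
print(f"Implied TC average: {tc_avg:.0%}")

# Assumed definition of "Interview Lift": allow rate among resolved cases
# with an examiner interview minus allow rate among those without one.
# The per-group counts are not shown on the page, so this is only a stub.
def interview_lift(granted_iv, resolved_iv, granted_no, resolved_no):
    return granted_iv / resolved_iv - granted_no / resolved_no
```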

Statute-Specific Performance

§101: 22.1% (-17.9% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 16.1% (-23.9% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 18 resolved cases
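
Assuming the per-statute deltas are additive percentage points like the career figure, the Tech Center averages can be backed out from the table; notably they all land at about 40%, and §103, one of the two statutes at issue in this rejection, is the only one where this examiner runs above average. A sketch with values transcribed from the table above:

```python
# Values transcribed from the table above; the back-derivation assumes
# additive percentage-point deltas (an assumption, not a documented formula).
examiner_rate = {"§101": 0.221, "§103": 0.434, "§102": 0.161, "§112": 0.183}
delta_vs_tc   = {"§101": -0.179, "§103": 0.034, "§102": -0.239, "§112": -0.217}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # each works out to ~40.0%
    print(f"{statute}: examiner {rate:.1%} vs implied TC avg {tc_avg:.1%}")
```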

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Claims 14-18 and 19-20 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 07/17/2025.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 13 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The term “approximately” in claim 13 is a relative term which renders the claim indefinite. The term “approximately” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention, rendering the limitation “to locate a third target location of the first target object at approximately the second time” indefinite. For the purposes of examination, the limitation has been interpreted as “to locate a third target location of the first target object.”

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 4-12 are rejected under 35 U.S.C. 103 as being unpatentable over Amanatullah et al. (US20200253683A1), hereinafter referenced as Amanatullah, in view of Robaina et al. (US20180197624A1), hereinafter referenced as Robaina.
Regarding claim 1, Amanatullah discloses: A method for surgical object imaging within a surgical space (Amanatullah: Abstract) comprising:

accessing a first set of images captured by a set of fixed optical sensors arranged about and facing the surgical space (Amanatullah: 0019: “Generally, throughout the surgery, the computer system can access a stream of images recorded by a set of (i.e., one or more) cameras arranged in or facing the surgical space.”);

aggregating the first set of images into a three-dimensional representation of the surgical space (Amanatullah: 0022: “the computer system can: access a first set of color frames recorded by a set of color cameras arranged in the surgical space at approximately a first time; compile this first set of color frames into a first (composite) image defining a first 3D color point cloud represented the surgical space based on known locations of the set of color cameras; process this first 3D color point cloud according to methods and techniques described below to detect and characterize objects in the surgical space at the first time;”);

detecting a first constellation of objects, moving within the surgical space, in the three-dimensional representation (Amanatullah: 0008: “As shown in FIGS. 1, 2A, 2B, and 4, a method S100 for tracking objects within a surgical space during a surgery includes, based on a first image depicting the surgical space at a first time: detecting a first constellation of objects in the surgical space at the first time in Block S110”; 0026: “The computer system then: stores locations of individual objects detected in the image in a 3D (or 2D) constellation of objects; and writes a timestamp from the image to this constellation of objects.”);

based on the three-dimensional representation of the surgical space, for each object in the first constellation of objects: extracting a first location of the object (Amanatullah: 0025: “The computer system then: stores locations of individual objects detected in the image in a 3D (or 2D) constellation of objects; and writes a timestamp from the image to this constellation of objects.”);

detecting a first object type of the object; deriving a first surgical status of the object (Amanatullah: 0031: “The computer system can then annotate the current object constellation with object type, state, and/or orientation (or “pose”) labels for each object detected in the image”);

calculating a first ranking score for the object based on the first object type and the first surgical status (Amanatullah: 0033: “The computer system can also implement object-tracking techniques to track objects over sequential images of the surgical space. In one implementation, the computer system: implements object-tracking techniques to track objects from preceding images to the current image (or from previous object constellations to the current object constellation)”; Wherein the addition of the object for tracking constitutes the assignment of a ranking score.); and

storing the first location, the first object type, the first surgical status, and the first ranking score in an object container in a set of object containers (Amanatullah: 0032: “The computer system can therefore generate an object constellation representing 3D (or 2D) locations of objects throughout the surgical space within a time interval (e.g., 50 milliseconds) represented by the current image of the surgical space. The computer system can repeat this process for each image recorded by the camera (or generated from frames received from the set of cameras) in the surgical space to generate an object constellation—annotated with object types, orientations, and/or poses of objects in the surgical space—for each of these images.”; Wherein a set of object constellations is generated, one for each generated 3D image.);

selecting a first target object, in the first constellation of objects, at a first time based on a first target ranking score of the first target object; selecting a second target object, in the first constellation of objects, at a second time succeeding the first time based on a second target ranking score of the second target object, the second target ranking score greater than the first target ranking score of the first target object (Amanatullah: 0033-0034: “the computer system: implements object-tracking techniques to track objects from preceding images to the current image (or from previous object constellations to the current object constellation); derives velocities of these objects based on their changes in position over these images (or over these object constellations); and annotates object representations in the current object constellation with velocities of the objects they connote. Furthermore, by tracking objects from preceding images to the current image, the computer system can port last assessments of contamination, injury, and retention scores for objects in the surgical space into the current time interval”; Wherein the tracking of objects based on their detection in a previous constellation constitutes tracking based on a target ranking score, wherein once the object has been detected in the current iteration, it no longer needs to be detected.); and

deriving a set of trajectories of the first constellation of objects based on object types, locations, surgical statuses, and ranking scores stored in the set of object containers (Amanatullah: 0032: “The computer system can therefore generate an object constellation representing 3D (or 2D) locations of objects throughout the surgical space within a time interval (e.g., 50 milliseconds) represented by the current image of the surgical space. The computer system can repeat this process for each image recorded by the camera (or generated from frames received from the set of cameras) in the surgical space to generate an object constellation—annotated with object types, orientations, and/or poses of objects in the surgical space—for each of these images.”; Wherein the set of generated object constellations constitutes the set of trajectories for the first constellation of objects.).

Amanatullah does not disclose expressly: articulating a mobile camera to locate a first target location of the first target object in a field of view of the mobile camera; articulating the mobile camera to locate a second target location of the second target object in the field of view of the mobile camera.

Robaina discloses a user wearable system (Robaina: 0101: “The wearable systems can use various sensors (e.g., accelerometers, gyroscopes, temperature sensors, movement sensors, depth sensors, GPS sensors, inward-facing imaging system, outward-facing imaging system, etc.) to determine the location and various other attributes of the environment of the user.”) able to perform object detection on captured images for the purposes of storing object locations in a map database (Robaina: 0102: “One or more object recognizers 708 can crawl through the received data (e.g., the collection of points) and recognize and/or map points, tag images, attach semantic information to objects with the help of a map database 710. The map database 710 may comprise various points collected over time and their corresponding objects.”), wherein map data points are supplemented by stationary cameras (Robaina: 0101: “This information may further be supplemented with information from stationary cameras in the room that may provide images or various cues from a different point of view.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the wearable device as disclosed by Robaina as a mobile camera for detection and tracking of objects disclosed by Amanatullah. The suggestion/motivation for doing so would have been “In some embodiments, all data is stored and all computations are performed in the local processing and data module 260, allowing fully autonomous use from a remote module.” (Robaina: 0054; Wherein local image processing distributes processing across multiple hardware systems.). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Amanatullah with Robaina to obtain the invention as specified in claim 1.

Regarding claim 4, Amanatullah in view of Robaina discloses: The method of Claim 1:

further comprising, during a setup period: accessing an initial set of images captured by the set of fixed optical sensors arranged about and facing the surgical space; detecting a set of overlapping features between the initial set of images; deriving a set of transforms based on the set of overlapping features between the initial set of images; and applying the set of transforms to the initial set of images to assemble an initial three-dimensional map in a set of three-dimensional maps of the surgical space (Amanatullah: 0022: “the computer system can: access a first set of color frames recorded by a set of color cameras arranged in the surgical space at approximately a first time; compile this first set of color frames into a first (composite) image defining a first 3D color point cloud represented the surgical space based on known locations of the set of color cameras”; Wherein the images captured concurrently constitute an overlapping feature.); and

wherein aggregating the first set of images into the three-dimensional representation of the surgical space comprises: applying the set of transforms to the first set of images to assemble a first three-dimensional map in the set of three-dimensional maps of the surgical space; and combining the set of three-dimensional maps of the surgical space into the three-dimensional representation of the surgical space (Amanatullah: 0022: “compile this first set of color frames into a first (composite) image defining a first 3D color point cloud represented the surgical space based on known locations of the set of color cameras; process this first 3D color point cloud according to methods and techniques described below to detect and characterize objects in the surgical space at the first time; and repeat this process for groups of concurrent images recorded by these cameras during subsequent time intervals (e.g., 50-millisecond time intervals) during the remainder of the surgery.”; Wherein the 3D color point cloud images are together able to represent the surgical space across time intervals.).

Regarding claim 5, Amanatullah in view of Robaina discloses: The method of Claim 1:

wherein selecting the first target object, in the first constellation of objects, comprises selecting the first target object comprising a needle driver, in the first constellation of objects, at the first time based on the first target ranking score of the needle driver; wherein articulating the mobile camera to locate the first target location of the first target object comprises articulating the mobile camera to locate the first target location of the needle driver in the field of view of the mobile camera (Amanatullah: 0025: “Upon receipt of a next image from the camera (or upon generation of a next image from concurrent frames received from the set of cameras), the computer system scans the image for features representative of surgical tools (e.g., graspers, clamps, needle drivers, retractors, distractors, cutters, suction tips, microscopes),”; Wherein the needle driver may be located using the wearable system’s camera.);

further comprising: identifying a second location of the second target object comprising a needle, in the first constellation of objects, at a third time between the first time and the second time; detecting proximity of the needle to the needle driver based on the first target location and the second location; and calculating the second target ranking score of the needle based on proximity of the needle to the needle driver at the third time; wherein selecting the second target object, in the first constellation of objects, comprises selecting the needle, in the first constellation of objects, at the second time based on the second target ranking score of the needle; and wherein articulating the mobile camera to locate the second target location of the second target object comprises articulating the mobile camera to locate the second target location of the needle in the field of view of the mobile camera (Amanatullah: 0025: “upon receipt of a next image from the camera (or upon generation of a next image from concurrent frames received from the set of cameras), the computer system scans the image for features representative of surgical tools …consumables (e.g., lap sponges, needles, knife blades, saw blades),”; 0029: “The computer system can also confirm identification of an object based on proximity of another object, such as: proximity of a needle driver to confirm a needle detected in the image”; Wherein the needle may be detected, and thus have its location confirmed in the object constellation, based on whether the needle driver has been previously identified to be in proximity.).
Regarding claim 6, Amanatullah in view of Robaina discloses: The method of Claim 5, further comprising:

detecting an initial set of immutable objects, fixed in the surgical space, in the three-dimensional representation of the surgical space at an initial time, the initial set of immutable objects comprising a patient located on an operating table (Amanatullah: 0011: “the computer system can detect and track a constellation of objects moving within the surgical space…relative to an operating table, a back table, a floor, and other fixed infrastructure in the surgical space.”; 0030: “The computer system can also: implement face detection to detect faces of surgical staff and the patient in the image; detect bodies connected to these faces; identify a patient by presence over the operating table”; Wherein the detection of a patient based on whether the patient is on a fixed operating table constitutes the patient on the operating table being assigned as an immutable object.); and

extracting a third location of the patient located on the operating table from the three-dimensional representation of the surgical space (Amanatullah: 0025: “the computer system can estimate a lateral, longitudinal, and depth position of the centroid of the volume or area of an object detected in the current image relative to an origin defined in the surgical space.”); and

calculating a first distance between the second location of the needle and the third location of the patient located on the operating table at a third time; and in response to the first distance falling below a threshold distance, increasing the second target ranking score of the needle (Amanatullah: 0119: “the computer system can detect removal of the particular object from the surgical space, such as by tracking exit of the particular object via a doorway of the surgical space. Once the computer system detects disposal or removal of the particular object, the computer system can disable contamination, injury, and retention tracking for this object and remove the particular object from calculations of subsequent contamination, injury, and retention risks and scores for other objects in the surgical space”; Wherein, likewise, if the object is within the surgical space, thus having its distance from the operating table below a threshold, the object is added for subsequent tracking, which constitutes increasing its ranking score.).

Regarding claim 7, Amanatullah in view of Robaina discloses: The method of Claim 1, further comprising:

detecting an initial set of immutable objects, fixed in the surgical space, in the three-dimensional representation of the surgical space at an initial time, the initial set of immutable objects comprising: a prep table; an operating table; and a patient located on the operating table (Amanatullah: 0011: “the computer system can detect and track a constellation of objects moving within the surgical space…relative to an operating table, a back table, a floor, and other fixed infrastructure in the surgical space.”; 0030: “The computer system can also: implement face detection to detect faces of surgical staff and the patient in the image; detect bodies connected to these faces; identify a patient by presence over the operating table”; Wherein the detection of a patient based on whether the patient is on a fixed operating table constitutes the patient on the operating table being assigned as an immutable object. In addition, the back table constitutes the prep table.); and

extracting a second location of the prep table, a third location of the operating table, and a fourth location of the patient located on the operating table from the three-dimensional representation of the surgical space (Amanatullah: 0024-0025: “based on a first image depicting the surgical space at a first time, detecting a first constellation of objects in the surgical space at the first time…the computer system can estimate a lateral, longitudinal, and depth position of the centroid of the volume or area of an object detected in the current image relative to an origin defined in the surgical space.”; Wherein the first image defines the 3D color point cloud, and wherein objects and their locations are detected in each image interval.).

Regarding claim 8, Amanatullah in view of Robaina discloses: The method of Claim 7:

further comprising: identifying a fifth location of the first target object comprising a surgical sponge, in the first constellation of objects, at the initial time prior to the first time; detecting proximity of the surgical sponge to a member of surgical staff within the surgical space; and calculating the first target ranking score of the surgical sponge based on proximity of the surgical sponge to the member of surgical staff at the initial time (Amanatullah: 0024-0025: “based on a first image depicting the surgical space at a first time, detecting a first constellation of objects in the surgical space at the first time…the computer system scans the image for features representative of surgical tools…, surgical drapes, consumables (e.g., lap sponges, needles, knife blades, saw blades)…the computer system can estimate a lateral, longitudinal, and depth position of the centroid of the volume or area of an object detected in the current image relative to an origin defined in the surgical space.”; Wherein the objects are added to the image interval’s constellation, which constitutes an assignment of a target ranking, based on their detection in the surgical room, their detection/addition indicating they are within proximity of the surgical room and its objects, such as surgical staff.);

wherein selecting the first target object, in the first constellation of objects, comprises selecting the surgical sponge, in the first constellation of objects, at the first time based on the first target ranking score of the surgical sponge; wherein articulating the mobile camera to locate the first target location of the first target object comprises articulating the mobile camera to locate the first target location of the surgical sponge in the field of view of the mobile camera (Amanatullah: 0033: “the computer system: implements object-tracking techniques to track objects from preceding images to the current image (or from previous object constellations to the current object constellation)”; Wherein an object is selected for tracking, which is performed with the wearable device disclosed by Robaina, based on its addition to the first object constellation.);

further comprising: identifying a sixth location of the second target object comprising a loaded needle driver, in the first constellation of objects, at a third time between the first time and the second time (Amanatullah: 0082: “the computer system can also weight a retention risk of the particular object according to a driver condition of the particular object. For example, a knife blade installed on a knife handle, a suture needle stored in a needle tray or retained by a needle driver, and a surgical sponge retained by forceps may represent little or no retention risk to the patient”; Wherein objects detected include a needle driver retaining a suture needle.);

detecting proximity of the loaded needle driver to the operating table based on the sixth location and the third location; and calculating the second target ranking score of the loaded needle driver based on proximity of the needle driver to the operating table at the third time (Amanatullah: 0024-0025: “based on a first image depicting the surgical space at a first time, detecting a first constellation of objects in the surgical space at the first time…the computer system scans the image for features representative of surgical tools…the computer system can estimate a lateral, longitudinal, and depth position of the centroid of the volume or area of an object detected in the current image relative to an origin defined in the surgical space.”; Wherein the objects are added to the image interval’s constellation, which constitutes an assignment of a target ranking, based on their detection in the surgical room, their detection/addition indicating they are within proximity of the surgical room and its objects, such as the operating table.);

wherein selecting the second target object, in the first constellation of objects, comprises selecting the loaded needle driver, in the first constellation of objects, at the second time based on the second target ranking score of the loaded needle driver; and wherein articulating the mobile camera to locate the second target location of the second target object comprises articulating the mobile camera to locate the second target location of the loaded needle driver in the field of view of the mobile camera (Amanatullah: 0033: “the computer system: implements object-tracking techniques to track objects from preceding images to the current image (or from previous object constellations to the current object constellation)”; Wherein an object is selected for tracking, which is performed with the wearable device disclosed by Robaina, based on its addition to the first object constellation.).
Regarding claim 9, Amanatullah in view of Robaina discloses: The method of Claim 1:

further comprising: identifying a second location of the first target object comprising a surgical sponge, in the first constellation of objects, at an initial time prior to the first time; detecting proximity of the surgical sponge to a patient located on the operating table at a third location within the surgical space; and calculating the first target ranking score of the surgical sponge based on proximity of the surgical sponge to the patient located on the operating table at the initial time based on the second location and the third location (Amanatullah: 0024-0025: “based on a first image depicting the surgical space at a first time, detecting a first constellation of objects in the surgical space at the first time…the computer system scans the image for features representative of surgical tools…, surgical drapes, consumables (e.g., lap sponges, needles, knife blades, saw blades)…the computer system can estimate a lateral, longitudinal, and depth position of the centroid of the volume or area of an object detected in the current image relative to an origin defined in the surgical space.”; Wherein the objects are added to the image interval’s constellation, which constitutes an assignment of a target ranking, based on their detection in the surgical room, their detection/addition indicating they are within proximity of the surgical room and its objects, such as a patient.);

wherein selecting the first target object, in the first constellation of objects, at the first time comprises selecting the first target object comprising the surgical sponge, in the first constellation of objects, at the first time based on the first target ranking score of the surgical sponge; wherein articulating the mobile camera to locate the first target location of the first target object comprises articulating the mobile camera to locate the first target location of the surgical sponge in the field of view of the mobile camera (Amanatullah: 0033: “the computer system: implements object-tracking techniques to track objects from preceding images to the current image (or from previous object constellations to the current object constellation)”; Wherein an object is selected for tracking, which is performed with the wearable device disclosed by Robaina, based on its addition to the first object constellation.); and

further comprising, in response to detecting absence of the first target object in the field of view of the mobile camera at a third time between the first time and the second time: predicting a probability of loss of the surgical sponge inside the patient located on the operating table (Amanatullah: 0014: “probability that a particular object will be retained (e.g., unintentionally left behind) in the patient upon completion of the surgery may increase inversely with distance between the particular object and the patient, may increase with time that the particular object is in contact with the patient (or the patient's wound specifically), and may be a function of whether or how the object particular is retained.”; 0079: “detect a particular object (e.g., of a type commonly inserted into or placed in contact with a patient, such as a lap sponge, a suture needle, a knife)—in the current image”; Wherein the retention rate increasing as the object is detected, using the wearable device disclosed by Robaina, to be within the patient includes a failure to detect removal of the object.); and

in response to the probability of loss exceeding a probability of loss threshold: issuing an alarm for manual survey of surgical sponges in the surgical space; and serving a prompt, to surgical staff in the surgical space, to retrieve a quantity of surgical sponges (Amanatullah: 0117: “in response to a retention score of a particular object in the surgical space exceeding a threshold retention score, the computer system can target a notification to retrieve the particular object—including a description of the particular object—from the patient to the primary surgeon and to the surgical assistant.”).

Amanatullah in view of Robaina does not disclose expressly: and in response to the probability of loss exceeding a probability of loss threshold: issuing an alarm for manual survey of surgical sponges in the surgical space; and serving a prompt, to surgical staff in the surgical space, to return a quantity of surgical sponges to a disposal container.

However, Amanatullah alternatively discloses: in response to a contamination score exceeding a contamination threshold: issuing an alarm for manual survey of surgical sponges in the surgical space; and serving a prompt, to surgical staff in the surgical space, to return a quantity of surgical sponges to a disposal container (Amanatullah: 0112: “in response to a contamination score of a particular object in the surgical space exceeding a threshold contamination score and if this particular object is disposable (e.g., lap sponge, a suture needle), the computer system can: identify a particular surgical staff member currently handling or otherwise nearest the particular object; and target a notification to discard the particular object directly to this particular surgical staff member.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the exceeded retention threshold notification disclosed by Amanatullah in view of Robaina with the exceeded contamination threshold notification disclosed by the alternative teaching of Amanatullah. The suggestion/motivation for doing so would have been “Once the computer system detects disposal or removal of the particular object, the computer system can disable contamination, injury, and retention tracking for this object and remove the particular object from calculations of subsequent contamination, injury, and retention risks and scores for other objects in the surgical space.” (Amanatullah: 0119). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Amanatullah in view of Robaina with the alternative teaching of Amanatullah to obtain the invention as specified in claim 9.
Regarding claim 10, Amanatullah in view of Robaina discloses: The method of Claim 1:

further comprising: accessing a surgical record from a surgery database, the surgical record defining: a type of surgery and a set of trajectories of a constellation of objects for the type of surgery (Amanatullah: 0121-0122: “the computer system calculates spatial and temporal variance of paths of individual objects consumed during the surgery and calculates an average retention time from removal of objects from inventory (e.g., from a object tray, from the back table) to use of these objects at the patient (or to disposal of these objects) (i.e., “retention time”)...the computer system can also flag this surgery or this surgical staff for post-operative review: if these efficiency, complexity, and efficacy metrics for this surgery and surgical staff deviate significantly from historical metrics for similar surgeries or surgical staff…The computer system can also flag periods of the surgery in which a subset of objects within the surgical space traversed anomalous paths—for their object types—through the surgical space and then prompt a reviewer to specifically review these periods of the surgery”; Wherein object trajectories in the current surgery may be compared to historical trajectories/metrics for surgeries of a similar type.).

Amanatullah in view of Robaina does not disclose expressly: accessing a surgical record from a surgery database at an initial time preceding the first time, the surgical record defining: a type of surgery; a primary surgeon associated with the type of surgery; a duration of the type of surgery; and an initial set of trajectories of an initial constellation of objects for the type of surgery; and extracting a set of initial mobile camera positions from the initial set of trajectories of the initial constellation of objects for the type of surgery; wherein articulating the mobile camera to locate the first target location of the first target object comprises: predicting a first target position of the mobile camera based on the set of initial mobile camera positions; and articulating the mobile camera to the first target position to locate the first target location of the first target object in the field of view of the mobile camera; and wherein articulating the mobile camera to locate the second target location of the second target object comprises: predicting a second target position of the mobile camera based on the set of initial mobile camera positions; and articulating the mobile camera to the second target position to locate the second target location of the second target object in the field of view of the mobile camera.

Robaina further discloses: the wearable device, prior to a surgeon performing a surgery, extracting information regarding a surgery being performed, the surgeon performing the surgery, and its duration based on information associated with either the patient or operating room (Robaina: 0316: “The wearable device can determine the type of the surgery based on the patient's virtual medical records or information associated with the operating room. In this example, the wearable device can determine that the surgeon 2102 is performing a minimally invasive heart surgery based on the scheduling information of the operating room”), wherein, based on the determined surgery, the wearable device may analyze the actions of the surgeon and determine whether they are correct (Robaina: 0339: “the wearable device can determine, based on the contextual information, whether a threshold condition for generating an alert is met at block 2450. The threshold condition may include a mistake in a medical procedure. For example, the wearable device can determine that standard steps of a surgery that is performed on a patient. The standard steps may include processes A, B, and C. The wearable device can monitor the surgeon's actions and detect that the surgeon has skipped the process B.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the initial surgery detection and surgery step tracking system further taught by Robaina for the detection of anomalous object trajectory disclosed by Amanatullah in view of Robaina. The suggestion/motivation for doing so would have been “the wearable device can present an alert to the user of the wearable device. The alert may include a focus indicator or an alert message indicating the mistake, the threshold condition, or a corrective measure of the mistake, etc.” (Robaina: 0340). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Amanatullah in view of Robaina does not disclose expressly: wherein articulating the mobile camera to locate the first target location of the first target object comprises: predicting a first target position of the mobile camera based on the set of initial mobile camera positions; and articulating the mobile camera to the first target position to locate the first target location of the first target object in the field of view of the mobile camera; and wherein articulating the mobile camera to locate the second target location of the second target object comprises: predicting a second target position of the mobile camera based on the set of initial mobile camera positions; and articulating the mobile camera to the second target position to locate the second target location of the second target object in the field of view of the mobile camera.

Robaina further discloses: the generation of a virtual object containing a list of surgical tools located in the sterile region, wherein the surgical device provides alerts regarding surgical tools and their last detected location to surgical staff wearing the wearable device (Robaina: Figure 25: Virtual Object 2520; 0363: “The wearable device can present the list of surgical instruments in the sterile region 2510 as shown by the virtual object 2520…the wearable device can show the phrase “bone curette” in a different color on the virtual object 2520. In response to the alert message, the surgeon can remove the bone curette from the sterile region 2510. Once the wearable device observes that the bone curette has been moved to the outside of the sterile region, the wearable device can remove the phrase “bone curette” from the virtual object 2520.”; 0368: “In addition, the FOV of a user's wearable device may cover only a portion of the sterile region 2510 and may not be able to track every object in the sterile region 2510. Furthermore, when a user may look away from the sterile region 2510 or leaves the operating room, other users that are interacting with the sterile region 2510 can continue to track the medical instruments entering or exiting the sterile region 2510.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the virtual object and notifications regarding objects within a location further taught by Robaina for alerting surgical staff regarding the updating of object locations in the system for tracking object locations disclosed by Amanatullah in view of Robaina. The suggestion/motivation for doing so would have been to allow surgical staff to constantly be aware of the last detected locations of surgical tools within the surgical space (Robaina: 0363 & 0368). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Amanatullah in view of Robaina with the further teaching of Robaina to obtain the invention as specified in claim 10.

Regarding claim 11, Amanatullah in view of Robaina discloses: The method of Claim 10, further comprising:

characterizing a difference between the initial set of trajectories of the initial constellation of objects and the first set of trajectories of the first constellation of objects at a third time; and in response to the difference between the initial set of trajectories of the initial constellation of objects and the first set of trajectories of the first constellation of objects exceeding a difference threshold: generating a notification alerting surgical staff within the surgical space of the difference; and transmitting the notification to surgical staff (Robaina: 0339-0340: “the wearable device can determine, based on the contextual information, whether a threshold condition for generating an alert is met at block 2450. The threshold condition may include a mistake in a medical procedure...the wearable device can determine that the threshold condition is met when the surgeon's scalpel is less than a threshold distance to the patient's left leg 2204l when the operation should be on the patient's right leg 2204r. If the threshold condition is met, at optional block 2460, the wearable device can present an alert to the user of the wearable device. The alert may include a focus indicator or an alert message indicating the mistake, the threshold condition, or a corrective measure of the mistake, etc.”; Wherein, if the trajectory of an object, such as of a scalpel, reaches a mistake threshold for a particular step in a procedure, then an alert is generated and transmitted to the surgical staff performing the mistake.).

Regarding claim 12, Amanatullah in view of Robaina discloses: The method of Claim 1:

wherein articulating the mobile camera to locate the first target object comprises articulating the mobile camera, facing and configured to image the surgical space at a first optical resolution, to locate the first target location of the first target object in the field of view of the mobile camera (Robaina: 0104-0105: “Based on this information and collection of points in the map database, the object recognizers 708a to 708n may recognize objects in an environment. For example, the object recognizers can recognize the patient, body parts of the patient (such as e.g., limbs, torso, head, organs, etc.), medical equipment (such as, e.g., surgical tools or medical devices), as well as other objects in a room (such as, e.g., windows, walls, etc.) or other persons in the room (such as, e.g., attending physicians, nurses, etc.)…The object recognitions may be performed using a variety of computer vision techniques. For example, the wearable system can analyze the images acquired by the outward-facing imaging system 464 (shown in FIG. 4)”; Wherein the wearable system iteratively performs image analysis in order to detect and track objects in the environment.); and

further comprising: accessing a first sequence of images captured by the mobile camera at the first optical resolution at a third time; extracting a first set of locations and a first set of times of the first target object from the first sequence of images; and storing the first set of locations and the first set of times of the first target object in a first target object container in the set of object containers (Amanatullah: 0025-0026: “upon receipt of a next image from the camera (or upon generation of a next image from concurrent frames received from the set of cameras), the computer system scans the image for features representative of surgical tools…The computer system then: stores locations of individual objects detected in the image in a 3D (or 2D) constellation of objects; and writes a timestamp from the image to this constellation of objects.”; Wherein upon detection of the object, by either the wearable device or fixed cameras, the object’s location and time detected are stored in the object constellation.).

Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Amanatullah in view of Robaina, and further in view of Gove et al. (US20150248772A1), hereinafter referenced as Gove.

Regarding claim 2, Amanatullah in view of Robaina discloses: The method of Claim 1.

Amanatullah in view of Robaina does not disclose expressly: in response to detecting absence of the first target object in the field of view of the mobile camera at a third time succeeding the first time, articulating the mobile camera to locate a third target location of the first target object in the field of view of the mobile camera; and in response to detecting absence of the second target object in the field of view of the mobile camera at a fourth time succeeding the second time, articulating the mobile camera to locate a fourth target location of the second target object in the field of view of the mobile camera.

Gove discloses: an imaging system able to track objects around the user, wherein the imaging system is able to notify the user when tracked objects are no longer detected, and thus have been marked as lost (Gove: Abstract: “The processing circuitry may track objects located around the user using the wide-angle images. The processing circuitry may issue an alert when tracked objects are determined to be a hazardous or when tracked objects may have been lost.”; 0045: “At step 130, device 10 may issue a warning (e.g., an alert or alarm) to user 42 to inform user 42 that the object of interest has been lost from the field of view…After receiving the warning, the user may assess environment 40 to confirm whether the object of interest has been lost or stolen.”).

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the processing circuitry for detecting whether an object has been lost taught by Gove for the subsequent tracking of objects in the first object constellation disclosed by Amanatullah in view of Robaina. The suggestion/motivation for doing so would have been “Distractions in public settings can cause accidents for the user such as causing the user to trip on or walk into obstacles, can generate a risk of theft of the mobile electronic device or other items belonging to the user, and can result in the user losing or leaving behind items due to a lack of awareness of their surroundings” (Gove: 0002). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Amanatullah in view of Robaina with Gove to obtain the invention as specified in claim 2.

Regarding claim 3, Amanatullah in view of Robaina discloses: The method of Claim 1:

wherein, in the case where a member of surgical staff presents a target object within the field of view of the mobile camera, the system performs: the extraction of a subsequent location and a subsequent surgical status of the target object from the three-dimensional representation of the surgical space (Robaina: 0101-0102: “The wearable systems can use various sensors…to determine the location and various other attributes of the environment of the user. This information may further be supplemented with information from stationary cameras in the room that may provide images or various cues from a different point of view…One or more object recognizers 708 can crawl through the received data (e.g., the collection of points) and recognize and/or map points, tag images, attach semantic information to objects with the help of a map database 710. The map database 710 may comprise various points collected over time and their corresponding objects.”; Wherein the mobile camera uses points from a map database to establish locations of objects in the environment.) (Amanatullah: 0034: “by tracking objects from preceding images to the current image, the computer system can port last assessments of contamination, injury, and retention scores for objects in the surgical space into the current time interval.”; Wherein upon subsequent detection of the object, previously extracted information regarding the object, including its status, is ported to the subsequent detection iteration.); and

storing the subsequent location and the subsequent surgical status of the target object in a subsequent object container in the set of object containers (Amanatullah: 0033: “The computer system can also implement object-tracking techniques to track objects over sequential images of the surgical space. In one implementation, the computer system: implements object-tracking techniques to track objects from preceding images to the current image (or from previous object constellations to the current object constellation)”).

Amanatullah in view of Robaina does not disclose expressly: wherein articulating the mobile camera to locate the first target location of the first target object comprises: calculating a first target orientation of the mobile camera to locate the first target location of the first target object in the field of view of the mobile camera; and generating a first prompt for a member of surgical staff to reposition the mobile camera within the surgical space to the target orientation; and further comprising, in response to detecting absence of the second target object in the field of view of the mobile camera at a third time succeeding the second time: generating a second prompt for the member of surgical staff to present the second target object within the field of view of the mobile camera.

Gove discloses: an imaging system able to track objects around the user, wherein the imaging system is able to notify the user when tracked objects are no longer detected, and thus have been marked as lost (Gove: Abstract: “The processing circuitry may track objects located around the user using the wide-angle images. The processing circuitry may issue an alert when tracked objects are determined to be a hazardous or when tracked objects may have been lost.”; 0045: “At step 130, device 10 may issue a warning (e.g., an alert or alarm) to user 42 to inform user 42 that the object of interest has been lost from the field of view…After receiving the warning, the user may assess environment 40 to confirm whether the object of interest has been lost or stolen.”). …

Prosecution Timeline

Dec 02, 2022
Application Filed
Sep 12, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499701: DOCUMENT CLASSIFICATION METHOD AND DOCUMENT CLASSIFICATION DEVICE
Granted Dec 16, 2025 (2y 5m to grant)

Patent 12488563: Hub Image Retrieval Method and Device
Granted Dec 02, 2025 (2y 5m to grant)

Patent 12444019: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND MEDIUM
Granted Oct 14, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 17%
With Interview (-21.4%): -5%
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
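
A minimal sketch of how these tiles appear to fit together, assuming the "With Interview" figure is simply the career allow rate plus the additive interview lift (an assumption, though it reproduces the displayed -5%; a production model would presumably clamp at 0%):

```python
# Sketch under the stated assumption; not the dashboard's actual model.
base = 3 / 18          # career allow rate -> grant probability, shown as 17%
lift = -0.214          # interview lift from this examiner's history

with_interview = base + lift             # -0.047 -> shown as -5%
print(f"Base: {base:.0%}, with interview: {with_interview:.0%}")
```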

Free tier: 3 strategy analyses per month