DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 25 February 2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 32-36, 43, 46-49, and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 2019/0223961 A1) in view of Bychkov (US 2017/0329916 A1), and further in view of Wolf (US 2020/0273563 A1).
Regarding claim 32, Barral teaches a system comprising:
a memory storing instructions (computer hardware of Barral inherently includes a memory to store computer software; see paragraph 0046);
a processor communicatively coupled to the memory and configured to execute the instructions (computer hardware of Barral inherently includes a processor to execute instructions; see paragraph 0046) to:
access imagery of a scene of a medical session captured by a plurality of sensors from a plurality of viewpoints (Barral discloses capturing images from a birds-eye view of the procedure, images from within the incision, and images of the surgical room; see paragraphs 0029 and 0031), the imagery including first imagery captured by a first sensor of the plurality of sensors from a first viewpoint of the plurality of viewpoints (camera captures images; paragraph 0021);
determine, during the medical session and based on the first imagery, a value of an activity visibility metric for the first sensor (Barral discloses that a machine learning algorithm automatically optimizes camera placement based on image content. To perform such an optimization, a visibility metric must inherently be determined by the algorithm; see paragraphs 0023-0024); and
facilitate, based on the value of the activity visibility metric, adjusting the first viewpoint of the first sensor (algorithm can automatically optimize camera placement; paragraph 0024).
Barral is silent regarding wherein the activity visibility metric is based on a general visibility of the first imagery and a specific visibility of an activity identified in the first imagery.
Bychkov teaches wherein an activity metric (“many-valued quality score”; paragraph 0250) is based on a general visibility of first imagery (general quality scores may be used to describe the general quality of the signal, e.g., overall luminance and signal-to-noise ratio; see paragraphs 0248 and 0250) and a specific visibility of an activity identified in the first imagery (the score may take into account the degree of an object associated with the physiological process being located within the field of view; paragraph 0250).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used the teaching of Bychkov with that of Barral in order to improve the collection of physiological data using scores indicative of many factors (see paragraphs 0019-0021 of Bychkov).
Barral in view of Bychkov is silent regarding wherein the activity comprises a predefined phase of the medical session.
Wolf teaches a system to determine, during the medical session (surgical procedure; paragraph 0089) and based on first imagery (image captured by camera 115; paragraph 0089), a value of an activity visibility metric for the first sensor (control application evaluates whether the target object escapes the field of view of the camera; paragraph 0090), the activity visibility metric based on a general visibility of the first imagery and a specific visibility of an activity identified in the first imagery (evaluation is based on the visibility of the target object and depending on events occurring during the surgical procedure; paragraphs 0089-0090), the activity comprising a predefined phase of the medical session (control application directs camera to track an ROI, and the control of cameras may be rule-based and follow an algorithm developed for a given surgical procedure; paragraph 0089); and
facilitate, based on the value of the activity visibility metric, adjusting the first viewpoint of the first sensor (camera 115 tracks a moving object based on the determination of the evaluation; paragraph 0090).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used the teaching of Wolf with that of the cited prior art in order to recommend remedial action when an abnormal event is detected (see paragraphs 0023-0024 of Wolf).
Regarding claim 33, Barral further teaches wherein the facilitating adjusting the first viewpoint of the first sensor comprises providing an output to a robotic system to instruct the robotic system to change a pose of the first sensor (the system can automatically optimize camera placement and move the camera; see paragraph 0024).
Regarding claim 34, Barral in view of Bychkov and Wolf teaches the system of claim 32, but does not expressly teach wherein the facilitating adjusting the first viewpoint of the first sensor comprises providing an output to a user to instruct the user to change a pose of the first sensor.
However, Barral teaches wherein the system may output commands to a user to cue the user to take certain actions (paragraph 0037).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have recognized that the system may also cue the user to adjust the position of a camera in order to optimize camera placement as disclosed in paragraph 0024 of Barral.
Regarding claim 35, Barral further teaches wherein the instructions comprise a machine learning model trained based on training imagery labeled with an activity of scenes captured in the training imagery (machine learning algorithm; see paragraph 0023); and
the determining the value of the activity visibility metric for the first sensor comprises using the machine learning model (machine learning algorithm can automatically optimize camera placement; paragraph 0024).
Regarding claim 36, Barral further teaches wherein the processor is further configured to execute the instructions to:
access additional imagery of the scene of the medical session captured by the plurality of sensors from another plurality of viewpoints, the additional imagery including second imagery captured by the first sensor from a second viewpoint different from the first viewpoint (images are continuously captured to automatically optimize camera placement; paragraphs 0021 and 0024); and
determine, based on the additional imagery, an additional value of the activity visibility metric that is higher than the value of the activity visibility metric (feedback loop continually adjusts the location applied by the robot 121, i.e., a metric is continually evaluated to determine the most optimal placement; paragraph 0026).
Regarding claim 43, Barral further teaches wherein the value of the activity visibility metric represents a rating of how visible an activity of the scene is in the first imagery (“automatically optimize camera placement (e.g., move the camera to the position that shows the most of the surgical area, or the like);” paragraph 0024).
Claims 46-49 are analyzed and rejected as method claims for performing the functions of the system of claims 32-35, respectively.
Regarding claim 52, Wolf further teaches wherein: the imagery includes second imagery captured by a second sensor of the plurality of sensors from a second viewpoint of the plurality of viewpoints (cameras 114, 121, 12, 125; paragraph 0089); and the activity identified in the first imagery is identified further based on the second imagery (one or more other cameras may be used to identify a surgical event, e.g., a source of the bleeding; paragraph 0088. See also paragraph 0117 for discussion of identifying surgical phases in video via computer analysis.).
Claims 41, 42, 44, and 45 are rejected under 35 U.S.C. 103 as being unpatentable over Barral (US 2019/0223961 A1) in view of Zhang (CN 110851978 A1, English translation provided), and further in view of Bychkov (US 2017/0329916 A1) and Wolf (US 2020/0273563 A1).
Regarding claim 41, Barral in view of Bychkov and Wolf teaches the system of claim 32. Barral further teaches wherein:
the imagery of the scene includes second imagery captured by a second sensor of the plurality of sensors from a second viewpoint of the plurality of viewpoints (second camera 201b; paragraph 0028);
but Barral in view of Bychkov and Wolf is silent regarding wherein the processor is further configured to execute the instructions to determine, based on the first imagery and the second imagery, an overall value of the activity visibility metric for the plurality of sensors; and
the facilitating adjusting the first viewpoint of the first sensor comprises adjusting the first viewpoint to improve the overall value of the activity visibility metric.
Zhang teaches multiple cameras (paragraph 0024) wherein imagery of the scene includes first and second imagery from first and second viewpoints captured by first and second sensors of a plurality of sensors (paragraph 0059),
a processor configured to execute the instructions to determine (inherent in the system of Zhang), based on the first imagery and the second imagery, an overall value of the activity visibility metric for the plurality of sensors (visibility values; paragraphs 0080-0081); and
the facilitating adjusting the first viewpoint of the first sensor comprises adjusting the first viewpoint to improve the overall value of the activity visibility metric (global optimization is performed to optimize visibility values; paragraph 0080).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used the teaching of Zhang with that of the cited prior art in order to improve visibility analysis to achieve optimal camera positioning (see paragraph 0010 of Zhang).
Regarding claim 42, Barral in view of Zhang, Bychkov, and Wolf teaches the system of claim 41, but is silent regarding wherein the facilitating adjusting the first viewpoint results in a lower value of the activity visibility metric for the first sensor and a higher overall value of the activity visibility metric for the plurality of sensors.
However, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have realized that global optimization of the visibility of multiple cameras may sacrifice the visibility of a single camera in order to improve visibility of the entire system (see paragraphs 0080-0086 of Zhang).
Regarding claim 44, Barral teaches a system comprising:
a memory storing instructions (computer hardware of Barral inherently includes a memory to store computer software; see paragraph 0046);
a processor communicatively coupled to the memory and configured to execute the instructions (computer hardware of Barral inherently includes a processor to execute instructions; see paragraph 0046) to:
access imagery of a scene of a medical session captured by a plurality of sensors from a plurality of viewpoints (Barral discloses capturing images from a birds-eye view of the procedure, images from within the incision, and images of the surgical room; see paragraphs 0029 and 0031), the imagery including first imagery captured by a first sensor of the plurality of sensors from a first viewpoint of the plurality of viewpoints (camera captures images; paragraph 0021);
determine, based on the first imagery, a first classification of an activity of the scene (activity is determined; see paragraphs 0015-0016);
determine, during the medical session and based on the first imagery, a value of an activity visibility metric for the first sensor (Barral discloses that a machine learning algorithm automatically optimizes camera placement based on image content. To perform such an optimization, a visibility metric must inherently be determined by the algorithm; see paragraphs 0023-0024);
but Barral is silent regarding wherein the processor is configured to:
determine that the value of the activity visibility metric for the first sensor is below a threshold value of the activity visibility metric; and
lower, based on the determining that the value of the activity visibility metric for the first sensor is below the threshold value of the activity visibility metric, a weighting of the first classification of the activity of the scene for determining an overall classification of the activity of the scene based on the imagery of the scene.
Zhang teaches a system comprising multiple cameras (paragraph 0024) configured to determine that the value of the activity visibility metric for the first sensor is below a threshold value of the activity visibility metric (threshold for visibility; paragraph 0071); and
lower, based on the determining that the value of the activity visibility metric for the first sensor is below the threshold value of the activity visibility metric, a weighting of the first classification of the activity of the scene for determining an overall classification of the activity of the scene based on the imagery of the scene (weight coefficients are changed; see paragraph 0072).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used the teaching of Zhang with that of Barral in order to improve visibility analysis to achieve optimal camera positioning (see paragraph 0010 of Zhang).
Barral in view of Zhang is silent regarding determining, during the medical session and based on the first imagery, a value of an activity visibility metric for the first sensor, the activity visibility metric based on a general visibility of the first imagery and a specific visibility of an activity identified in the first imagery, the activity comprising a predefined phase of the medical session.
Wolf teaches a system to determine, during the medical session (surgical procedure; paragraph 0089) and based on first imagery (image captured by camera 115; paragraph 0089), a value of an activity visibility metric for the first sensor (control application evaluates whether the target object escapes the field of view of the camera; paragraph 0090), the activity visibility metric based on a general visibility of the first imagery and a specific visibility of an activity identified in the first imagery (evaluation is based on the visibility of the target object and depending on events occurring during the surgical procedure; paragraphs 0089-0090), the activity comprising a predefined phase of the medical session (control application directs camera to track an ROI, and the control of cameras may be rule-based and follow an algorithm developed for a given surgical procedure; paragraph 0089); and
facilitate, based on the value of the activity visibility metric, adjusting the first viewpoint of the first sensor (camera 115 tracks a moving object based on the determination of the evaluation; paragraph 0090).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have used the teaching of Wolf with that of the cited prior art in order to recommend remedial action when an abnormal event is detected (see paragraphs 0023-0024 of Wolf).
Regarding claim 45, Zhang further teaches wherein the processor is further configured to execute the instructions to facilitate, based on the determining that the value of the activity visibility metric for the first sensor is below the threshold value of the activity visibility metric, adjusting the first viewpoint of the first sensor (camera position is optimized from the visibility value; paragraphs 0080-0081).
Allowable Subject Matter
Claims 37-40 and 51 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding dependent claims 37 and 51, the prior art does not teach or render obvious a processor to determine that the value of the activity visibility metric for the first sensor is below a threshold value of the activity visibility metric, and use, based on the determining that the value of the activity visibility metric for the first sensor is below the threshold value of the activity visibility metric, a generative model to produce generated imagery based on the second imagery and the third imagery, in combination with the remaining limitations of the claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 attached.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHIAWEI A CHEN whose telephone number is (571)270-1707. The examiner can normally be reached Mon-Fri 12:00pm - 9:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran, can be reached at (571)272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHIAWEI CHEN/Primary Examiner, Art Unit 2637