DETAILED ACTION
Priority
Applicant claims priority to Application No. JP2020-122015, filed on July 2, 2020. The Examiner acknowledges Applicant’s claim for priority.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on January 1, 2023 has been considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1, 2, 4-7, and 10-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 2, 4-7, and 10-18 are rejected based on their dependency from rejected independent claim 1.
The term “central” in claims 1, 19, and 20 is a relative term which renders the claims indefinite. Per MPEP 2173.05(b), “when a term of degree is used in the claim, the examiner should determine whether the specification provides some standard for measuring that degree. … If the specification does not provide some standard for measuring that degree, a determination must be made as to whether one of ordinary skill in the art could nevertheless ascertain the scope of the claim (e.g., a standard that is recognized in the art for measuring the meaning of the term of degree).”
The term “central” is not defined by the claim and the specification does not provide a standard for ascertaining the requisite degree. Further, there is no recognized standard for what constitutes “central” for a portion of a display screen. As such, one of ordinary skill in the art could not ascertain the scope of the term.
Because the claims use relative terminology without a standard for evaluating the meaning of the term, one of ordinary skill in the art would not be able to determine the boundaries of the claims. Therefore, the claims are indefinite. For purposes of examination, the Examiner will interpret “central,” under the broadest reasonable interpretation, as encompassing any position information that places the user in close proximity to the display.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-7, and 10-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1
Claims 1-20 recite subject matter within a statutory category as a process, machine, and/or article of manufacture. However, as shown in the following steps, claims 1, 2, 4-7, and 10-20 are nonetheless unpatentable under 35 U.S.C. 101.
Step 2A Prong One
Claim 1 states:
A medical display system, comprising:
a camera;
a microphone;
a display screen;
and a processor configured to:
control display of display information on the display screen based on information output from a medical device;
control the camera to capture an imaging region, wherein the imaging region is a part of a region from which at least the display screen is viewable;
acquire, via the microphone, a voice in the region from which at least the display screen is viewable;
recognize, based on the captured imaging region and the acquired voice, a first user candidate from a plurality of user candidates present in the region;
determine a specific condition based on information regarding a specific situation in a surgery;
determine the first user candidate one of satisfies or does not satisfy the specific condition in the imaging region;
control, in a case where the candidate satisfies the specific condition in the imaging region, the display information based on one of a voice of the first user candidate or an input triggered by the voice of the first user candidate,
wherein the information regarding the specific situation includes information regarding a position of the first user candidate, and the information regarding the position of the first user candidate includes a position corresponding to a central portion of the display screen in a horizontal width of the display screen;
and control, in a case where the first user candidate does not satisfy the specific condition in the imaging region, the display information based on one of a voice of a second user candidate of the plurality of user candidates different from the first user candidate, or an input triggered by the voice of the second user candidate.
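For ease of reference only, the conditional control flow recited above can be summarized in the following minimal Python sketch. The sketch is a reading aid: all names, thresholds, and data structures are hypothetical and are not drawn from the claims, the specification, or any cited reference.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        position_x: float   # horizontal position relative to the display screen
        voice_command: str

    def satisfies_central_condition(candidate, screen_width, tolerance=0.1):
        # Hypothetical stand-in for the "specific condition": the candidate is
        # positioned within a central portion of the display screen's
        # horizontal width.
        center = screen_width / 2
        return abs(candidate.position_x - center) <= tolerance * screen_width

    def control_display(candidates, screen_width):
        # A first user candidate is recognized from the plurality of candidates
        # present in the imaging region (stand-in: the first in the list).
        first, second = candidates[0], candidates[1]
        # If the first candidate satisfies the condition, the display
        # information is controlled by the first candidate's voice;
        # otherwise by the second candidate's voice.
        chosen = first if satisfies_central_condition(first, screen_width) else second
        return chosen.voice_command

    # Example: the first candidate stands off-center, so the second
    # candidate's voice controls the display information.
    users = [Candidate("A", 0.1, "show vitals"), Candidate("B", 0.5, "show imaging")]
    print(control_display(users, screen_width=1.0))  # prints "show imaging"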
The broadest reasonable interpretation of these steps includes mental processes and/or organizing human activity because each of the identified limitations can practically be performed in the human mind or with pen and paper. Other than reciting generic computer terms such as “display,” “camera,” “microphone,” or “processor,” nothing in the claims precludes these limitations from practically being performed by a healthcare provider. For example, but for the “processor” language, “determine a specific condition based on information regarding a specific situation in a surgery” in the context of this claim encompasses the organized human activity of a healthcare provider writing down patient information in an operating room. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” or “Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Similarly, these steps of:
recognize, based on the captured imaging region and the acquired voice, a first user candidate from a plurality of user candidates present in the region;
determine a specific condition based on information regarding a specific situation in a surgery;
determine the first user candidate one of satisfies or does not satisfy the specific condition in the imaging region;
the information regarding the specific situation includes information regarding a position of the first user candidate, and the information regarding the position of the first user candidate includes a position corresponding to a central portion of the display screen in a horizontal width of the display screen;
as drafted, could also describe a healthcare provider recognizing the surgical team’s positions in an operating room and adjusting the team’s orientation accordingly before a procedure begins. Therefore, under the broadest reasonable interpretation, these steps include managing personal behavior as an abstract idea.
Independent claims 19 and 20 cover similar steps of controlling an imaging region. These claims fall under the same category of abstract idea and follow the same rationale as claim 1.
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claim 5, which recites “wherein the information regarding the first user candidate includes correspondence information in which information regarding a feature of an operator is associated with information regarding a feature of a surgeon,” a limitation that may be performed in the mind but for the recitation of generic computer components).
Dependent claims 5, 6, 10, 11, 13, 15, and 18 add additional elements to their parent claims; these elements are examined in the following steps for integration of the abstract idea into a practical application.
Step 2A Prong Two
This judicial exception of “Mental Processes” or “Organizing Human Activity” is not integrated into a practical application. Independent claim 1’s system recites additional elements such as a “display,” “camera,” “microphone,” and “processor.” Independent claims 19 and 20’s method and device recite the same additional elements and generic components listed above. The “display,” “camera,” and “processor” will be treated as generic computer components. The “microphone” will be treated as an additional element. In particular, these additional elements do not integrate the abstract idea into a practical application because the additional elements:
amount to mere instructions to apply an exception (such as the recitation of “A medical display system, comprising: a camera; a microphone; a display screen; and a processor configured to:”, which amounts to invoking computers as a tool to perform the abstract idea; see Applicant’s specification [0158], “a program constituting the software is installed to a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like,” which recites general computer components to carry out the task; see MPEP 2106.05(f)); and
add insignificant extra-solution activity to the abstract idea (the recitation of “control the camera to capture an imaging region, wherein the imaging region is a part of a region from which at least the display screen is viewable” and “control, in a case where the candidate satisfies the specific condition in the imaging region, the display information based on one of a voice of the first user candidate or an input triggered by the voice of the first user candidate” amounts to mere data gathering; the recitation of “and control, in a case where the first user candidate does not satisfy the specific condition in the imaging region, the display information based on one of a voice of a second user candidate of the plurality of user candidates different from the first user candidate, or an input triggered by the voice of the second user candidate” and “via the microphone” amounts to insignificant application; see MPEP 2106.05(g)).
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims. For example, claim 18, “the processor is further configured to detect the line-of-sight of the one of the first user candidate or the second user candidate within the imaging region,” adds insignificant extra-solution activity to the abstract idea which amounts to mere data gathering. Claims 5, 6, 10, 11, 13, and 15 (quoted in full under Step 2B below) recite additional limitations which add insignificant extra-solution activity to the abstract idea by selecting a particular data source or type of data to be manipulated.
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.
The remaining dependent claims (2, 4, 7, 12, 14, 16, and 17) do not recite additional elements or activity but further narrow or define the abstract idea embodied in the claims, and hence also do not integrate the aforementioned abstract idea into a practical application.
Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and add insignificant extra-solution activity to the abstract idea. Additionally, the limitations other than the abstract idea per se amount to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields.
As previously noted, the claims recite an additional element of a microphone. MacCaughelty (US6055501) demonstrates, with “transducer means is preferably a conventional microphone,” that microphones were conventional long before the priority date of the claimed invention. As such, this additional element, individually and in combination with the other additional elements, does not amount to significantly more.
To elaborate:
“control the camera to capture an imaging region, wherein the imaging region is a part of a region from which at least the display screen is viewable” is equivalent to receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
“control, in a case where the candidate satisfies the specific condition in the imaging region, the display information based on one of a voice of the first user candidate or an input triggered by the voice of the first user candidate” is equivalent to receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i);
“and control, in a case where the first user candidate does not satisfy the specific condition in the imaging region, the display information based on one of a voice of a second user candidate of the plurality of user candidates different from the first user candidate, or an input triggered by the voice of the second user candidate” is equivalent to selecting information, based on types of information and availability of information, for collection, analysis, and display, Electric Power Group, LLC v. Alstom S.A.;
Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea and is consistent with the additional elements in the independent claims. To elaborate:
claim 5, “wherein the information regarding the first user candidate includes correspondence information in which information regarding a feature of an operator is associated with information regarding a feature of a surgeon,” is equivalent to arranging a hierarchy of groups and sorting information, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015);
claim 6, “wherein the information regarding the checking operation before the start of the surgery includes information regarding an operation authority including a plurality of categories, and the processor is further configured to allocate the first user candidate who has performed the checking operation before the start of the surgery into the plurality of categories of the operation authority,” is equivalent to electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii);
claim 10, “wherein, in a case where the plurality of user candidates satisfies the specific condition, the processor is further configured to set a priority order of an operation authority for each of the plurality of user candidates,” is equivalent to electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii);
claim 11, “wherein the processor is further configured to update the priority order of the operation authority based on a situation of the surgery with details of the information registered before the start of the surgery as an initial state,” is equivalent to electronic recordkeeping, Alice Corp., MPEP 2106.05(d)(II)(iii);
claim 13, “wherein the processor is further configured to temporarily transfer an operation authority to a designated user candidate of the plurality of user candidates in a case where the first user candidate issues a first voice command for user change,” is equivalent to arranging a hierarchy of groups and sorting information, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015);
claim 15, “wherein the processor is further configured to recognize a fourth user candidate of a plurality of user candidates that is different from the first user candidate and the second user candidate and exclude the fourth user candidate from the plurality of user candidates in a case where the fourth user candidate does not satisfy the specific condition,” is equivalent to arranging a hierarchy of groups and sorting information, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015); and
claim 18, “the processor is further configured to detect the line-of-sight of the one of the first user candidate or the second user candidate within the imaging region,” is equivalent to receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i).
Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4-6, 10-12 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nawana et al. (US20140081659) in view of Paul (US20190279640).
Regarding claim 1, Nawana teaches:
A medical display system, comprising:
a camera; ([0129] “A person skilled in the art will appreciate that data can be captured in a variety of ways, e.g., using a camera”)
a microphone; ([0129] “smartphone,” which has a microphone within viewable range of the phone’s display unit; see also [0209] “Non-limiting examples of no-touch controls include … voice recognition,” which functions using the microphone in computers, mobile phones, or other microphone-enabled devices)
a display screen; ([0024] “Providing the determined medical diagnoses can include showing the determined medical diagnoses on the display,” where the medical diagnoses are output from a medical device to the display)
and a processor configured to: ([0021] “a medical system is provided that in one embodiment includes a processor”)
control display of display information on the display screen based on information output from a medical device; ([0205] “FIG. 16 illustrates a user interface 80 showing instrument information in the form of a list of instruments used in the surgical procedure” where the user interface [i.e., the display] outputs information on the list of instruments [i.e., a medical device] used in a surgical procedure)
control the camera to capture an imaging region, wherein the imaging region is a part of a region from which at least the display screen is viewable; ([0129] “A person skilled in the art will appreciate that data can be captured in a variety of ways, e.g., using a camera (standalone or integrated into another device such as a mobile phone or tablet); a video camera (standalone or integrated into another device such as a mobile phone or tablet); one or more sensors (e.g., gyro, accelerometer, global position system (GPS), etc.) on a smartphone,” where the imaging unit [i.e., a camera] is within viewable range of a display [i.e., on a smartphone]; see also [0209] “controls can also be used in any combination, e.g., looking at a specific location in a spatial field room and speaking a specific activation word,” where the imaging region is a part of a region from which the display screen is viewable)
acquire, via the microphone, a voice in the region from which at least the display screen is viewable; ([0209] “no touch examples include… voice recognition (e.g., speaking a certain key term or key phrase to cause a certain display to be shown on the user interface),” which functions using microphone-enabled devices; see also [0209] “controls can also be used in any combination, e.g., looking at a specific location in a spatial field room and speaking a specific activation word,” where the user is in viewable range of the display screen)
determine a specific condition based on information regarding a specific situation in a surgery; ([0271] “the medical personnel can be tracked throughout the surgical procedure such that at any given time during the procedure (including when the patient may not necessarily be present, such as during setup and clean up), the personnel tracking module 236 can be configured to gather data regarding medical personnel present in the OR,” where the data gathered on the medical personnel’s locations [i.e., determining a specific condition] establishes who is present in the operating room [i.e., information regarding a specific situation in a surgery])
determine the first user candidate one of satisfies or does not satisfy the specific condition in the imaging region; ([0272] “The medical personnel can be registered and tracked in a variety of ways, similar to that discussed above regarding equipment tracking by the equipment tracking module 232, such as by using one or more of optical and/or infrared cameras in the OR,” where the infrared cameras determine the location [i.e., determine whether the user satisfies the specific condition in the imaging region])
control, in a case where the candidate satisfies the specific condition in the imaging region, the display information based on one of a voice of the first user candidate or an input triggered by the voice of the first user candidate ([0209] “In another exemplary embodiment, the system 10 can be configured to allow selective switching between the different displays via no-touch controls. In this way, medical personnel can quickly receive data without compromising sterility and without having to change their position relative to the patient and without having to free their hands from instrument(s) and/or other surgical duties. No-touch controls can be… voice recognition (e.g., speaking a certain key term or key phrase to cause a certain display to be shown on the user interface)”)
wherein the information regarding the specific situation includes information regarding a position of the first user candidate, ([0271] “the medical personnel can be tracked throughout the surgical procedure such that at any given time during the procedure (including when the patient may not necessarily be present, such as during setup and clean up), the personnel tracking module 236 can be configured to gather data regarding medical personnel present in the OR,” where tracking the location of personnel includes the position of the first user candidate)
and the information regarding the position of the first user candidate includes a position corresponding to a central portion of the display screen in a horizontal width of the display screen; (see [0271] above; see also [0272] “The medical personnel can be registered and tracked in a variety of ways, similar to that discussed above regarding equipment tracking by the equipment tracking module 232, such as by using one or more of optical and/or infrared cameras in the OR, smart clothing monitors, sensors in instruments, motion capture, and body sensors,” where the tracking can occur throughout the facility; see also [Paul 0014] “microphones is/are positioned in the center or at some other suitable location of an operating room or other medical facility to be controlled,” where the display’s microphones, which gather voice instructions during surgeries, are located centrally in the operating room)
Regarding claim 1 Nawana does not explicitly teach, as taught by Paul:
recognize, based on the captured imaging region and the acquired voice, a first user candidate from a plurality of user candidates present in the region; ([Figure 8] voice-enabled digital conversational assistant system (800); see also [0035] “the present system may include facial and/or voice recognition to distinguish between different speakers,” where facial and voice recognition recognize a user candidate within a region; see also [0014] “microphones is/are positioned in the center or at some other suitable location of an operating room,” where the display’s microphones are located within a central region)
and control, in a case where the first user candidate does not satisfy the specific condition in the imaging region, the display information based on one of a voice of a second user candidate of the plurality of user candidates different from the first user candidate, or an input triggered by the voice of the second user candidate. (see Paul [0016], quoted below with respect to claim 19, where the user controls a display via voice commands; see also [Paul 0034] “The digital assistant could prioritize commands based on identified speakers such that commands by Speaker A might take priority over commands by Speaker B if they conflict,” where the speakers’ command priority [i.e., the case in which the first user candidate does not satisfy the specific condition in the imaging region] results in controlling the display information based on the voice of a second user candidate)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
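For illustration of Paul’s speaker-priority teaching ([0034]), the following minimal sketch shows one way conflicting voice commands could be arbitrated by speaker rank. The function and example values are hypothetical and are not asserted to be either reference’s implementation.

    def arbitrate(commands, priority):
        # commands: (speaker, command) pairs issued at the same time.
        # priority: speaker -> rank; a lower rank wins when commands conflict
        # (cf. Paul [0034], where commands by Speaker A might take priority
        # over commands by Speaker B).
        speaker, command = min(commands, key=lambda sc: priority.get(sc[0], float("inf")))
        return command

    # Example: Speaker A's command wins the conflict.
    print(arbitrate([("Speaker B", "lights off"), ("Speaker A", "lights on")],
                    {"Speaker A": 0, "Speaker B": 1}))  # prints "lights on"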
Regarding claim 2, Nawana-Paul teaches all of the limitations of claim 1. Nawana also teaches:
wherein the processor is further configured to determine the specific condition based on at least one of information registered before a start of the surgery or information regarding a checking operation before the start of the surgery ([0165] “The pre-op module 202 can generally provide users of the system 10 with an interface for planning surgery, e.g., … operating room (OR) pre-op planning… The pre-op module 202 can also facilitate logistical surgical planning such as… personnel scheduling,” where the pre-op module registers personnel participating in the surgery before its start [i.e., information registered before a start of the surgery] to determine the specific condition)
Regarding claim 4, Nawana-Paul teaches all of the limitations of claim 2. Paul also teaches:
wherein the information registered before the start of the surgery includes information regarding the first user candidate (see Paul [0035], “user login procedure” and “provide real-time biometric identification information,” quoted below with respect to claim 6, where the biometric information is a voice pattern and the user login satisfies a predetermined condition registered before the start of the surgery)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
Regarding claim 5, Nawana-Paul teaches all of the limitations of claim 4. Nawana also teaches:
wherein the information regarding the first user candidate includes correspondence information in which information regarding a feature of an operator is associated with information regarding a feature of a surgeon. ([0271] “the personnel tracking module 236 can be configured to gather data regarding medical personnel present in the OR. The personnel tracking module 236 can therefore be configured to determine which and how many medical personnel are present in the OR at any given time and how long each of the medical personnel are present in the OR,” where determining which medical personnel are in the facility associates an operator with information regarding a feature of a surgeon)
Regarding claim 6, Nawana-Paul teaches all of the limitations of claim 2. Paul also teaches:
wherein the information regarding the checking operation before the start of the surgery includes information regarding an operation authority including a plurality of categories, ([0034] “In still another aspect, the present system may include facial and/or voice recognition to distinguish between different speakers (obviously, due to face-covering personal protective screens, masks, and other devices, facial recognition capabilities would likely be less effective than voice recognition). The capability to identify users would enable an entire surgical team to use a single voice-enabled system, which could modify actions based on the speaker. The digital assistant could prioritize commands based on identified speakers such that commands by Speaker A might take priority over commands by Speaker B if they conflict,” where priority denotes the operation authority of a user)
and the processor is further configured to allocate the first user candidate who has performed the checking operation before the start of the surgery ([Paul 0035] “a type of ‘access control’ is used so only a particular user can issue certain commands. This could be implemented during a user login procedure at an input device prior to the user entering a room or facility that has been configured with the present invention.” and “the user may be required to provide real-time biometric identification information, such as presenting a fingerprint or voice pattern,” where the login information [i.e., a first user is registered in advance to satisfy a predetermined condition] in the imaging region [i.e., a room that has been configured with the present invention] is satisfied via a voice [i.e., a voice pattern]; see also Paul [0016] “Display monitors,” quoted below with respect to claim 19, for the display of information, where access control implemented during the login procedure allocates operation authority over certain commands to a user before the start of the surgery) into the plurality of categories of the operation authority ([Nawana 0152] “The care provider database 314 can include data regarding a plurality of medical practitioners, each of the medical practitioners being associated with at least one geographic location, at least one area of medical practice, at least one previously performed surgical procedure, and/or at least one previously provided medical treatment,” where a surgeon’s area of medical practice is one of many categories associated with the provider’s operation authority)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
Regarding claim 10, Nawana-Paul teaches all of the limitations of claim 2. Paul also teaches:
wherein, in a case where the plurality of user candidates satisfies the specific condition, the processor is further configured to set a priority order of an operation authority for each of the plurality of user candidates. ([0034] “The digital assistant could prioritize commands based on identified speakers such that commands by Speaker A might take priority over commands by Speaker B if they conflict,” where the prioritized commands establish a priority order of the operation authority for multiple user candidates)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
Regarding claim 11, Nawana-Paul teaches all of the limitations of claim 10. Paul also teaches:
wherein the processor is further configured to update the priority order of the operation authority based on a situation of the surgery with details of the information registered before the start of the surgery as an initial state. (see Paul’s “access control” above, which allocates authority over certain commands to a user; see also [Paul 0026] “a decision support database… could include associations between inputs to the system and actions to be taken by a particular device, instrument, system or subsystem in response… The decision support database could also include mappings to Evidence-Based Medicine guidelines, thereby providing the user with the ability to determine real-time optimal treatments for patients undergoing surgery. The guidelines could include National Comprehensive Cancer Network guidelines, Surgical Specialty guidelines, Cochrane databases, and third-party health-related decision systems built on machine learning models,” where the database of guidelines and decision systems provides a basis for updating the priority order based on a situation of the surgery)
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms and recommendation software to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
Regarding claim 12, Nawana-Paul teaches all of the limitations of claim 10. Paul also teaches:
wherein the processor is further configured to update the priority order of the operation authority based on an operation performed by a third user candidate of the plurality of user candidates that has a specific authority. ([0026] “a decision support database… could include associations between inputs to the system and actions to be taken by a particular device, instrument, system or subsystem in response… The decision support database could also include mappings to Evidence-Based Medicine guidelines, thereby providing the user with the ability to determine real-time optimal treatments for patients undergoing surgery. The guidelines could include National Comprehensive Cancer Network guidelines, Surgical Specialty guidelines, Cochrane databases, and third-party health-related decision systems built on machine learning models,” where the decision support database [i.e., a processor] determines real-time optimal treatments based on surgical specialty guidelines [i.e., updates the priority order of the operation authority]; see also [0034] “The capability to identify users would enable an entire surgical team to use a single voice-enabled system, which could modify actions based on the speaker. The digital assistant could prioritize commands based on identified speakers such that commands by Speaker A might take priority over commands by Speaker B if they conflict,” where authority over a performed operation changes based on different speakers [i.e., the first, second, third, and remaining user candidates])
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms and recommendation software to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
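For illustration only, the following minimal sketch shows one way a priority order of operation authority could be set from information registered before the surgery, updated based on the surgical situation, and reordered by a user with specific authority (cf. claims 10-12). All names and data structures are hypothetical and are not drawn from Nawana or Paul.

    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        specialties: set = field(default_factory=set)

    class OperationAuthority:
        def __init__(self, registered_users):
            # The order registered before the start of the surgery serves as
            # the initial state (cf. claim 11).
            self.order = list(registered_users)

        def update_for_situation(self, situation):
            # Update the priority order based on a situation of the surgery:
            # users whose specialty matches the situation move to the front
            # (the stable sort otherwise preserves the registered order).
            self.order.sort(key=lambda u: 0 if situation in u.specialties else 1)

        def transfer(self, granting_user, designated_user):
            # The current highest-priority user hands the top of the order
            # to a designated user (cf. claims 12 and 13).
            if granting_user is self.order[0] and designated_user in self.order:
                self.order.remove(designated_user)
                self.order.insert(0, designated_user)

    # Example: the anesthesiologist moves to the front during sedation.
    auth = OperationAuthority([User("surgeon", {"resection"}),
                               User("anesthesiologist", {"sedation"})])
    auth.update_for_situation("sedation")
    print([u.name for u in auth.order])  # prints ['anesthesiologist', 'surgeon']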
Regarding claim 15, Nawana-Paul teaches all of the limitations of claim 2. Paul also teaches:
wherein the processor is further configured to recognize a fourth user candidate of a plurality of user candidates that is different from the first user candidate and the second user candidate and exclude the fourth user candidate from the plurality of user candidates in a case where the fourth user candidate does not satisfy the specific condition ([0034] “the present system may include facial and/or voice recognition to distinguish between different speakers… The capability to identify users would enable an entire surgical team to use a single voice-enabled system, which could modify actions based on the speaker. The digital assistant could prioritize commands based on identified speakers such that commands by Speaker A might take priority over commands by Speaker B if they conflict,” where different speakers [i.e., the first, second, and remaining user candidates] are distinguished [i.e., recognized] so that commands can be prioritized [i.e., a non-qualifying user candidate is excluded])
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Nawana with the teachings of Paul, with a reasonable expectation of success, by coordinating Nawana’s voice recognition algorithm to explicitly prioritize surgeon authority in the operating room with temporal, distinguishable, and transferrable ownership of voice controls. This would have streamlined every operator’s interactions with the device, saving time and minimizing added complexity in the operating room to ensure a safe procedure. Paul is adaptable to Nawana because both inventions use voice recognition algorithms to integrate medical devices into an operating room system. Paul teaches that “there has not been a technology platform capable of making operating room systems fully integrated in an intuitive, user-friendly manner” [0008]. One of ordinary skill in the art, starting from Nawana, would have found Paul’s teachings while researching solutions for integrating operating room systems in a user-friendly manner.
Regarding claim 16, Nawana-Paul teaches all of the limitations of claim 1. Nawana also teaches:
wherein the input triggered by the voice includes a line-of-sight of one of the first user candidate or the second user candidate in a case where the one of the first user candidate or the second user candidate issues a voice command. ([0209] “Examples of no-touch commands include motion sensing … voice recognition (e.g., speaking a certain key term or key phrase to cause a certain display to be shown on the user interface)” and “No-touch controls can also be used in any combination, e.g., looking at a specific location in a spatial field room and speaking a specific activation word,” where performing no-touch controls [i.e., an input triggered by the voice] includes a line of sight [i.e., motion sensing] and a specific activation word [i.e., a voice command])
Regarding claim 17, Nawana-Paul teaches all of the limitations of claim 16. Nawana also teaches:
wherein the processor is further configured to execute a specific process corresponding to the voice command ([0202] “As mentioned above, the operation module 204 can be configured to read information from and to write information to the operation database 304. The operation database 304 can include a products and procedures database 320, an OR database 322, an equipment database 324, and an ethnography Database,” where reading information from and writing information to the databases is a specific process) based on a line-of-sight position of the one of the first user candidate or the second user candidate on the display screen. ([0209] “No-touch controls can also be used in any combination, e.g., looking at a specific location in a spatial field room and speaking a specific activation word,” where the operation module performs no-touch controls [i.e., executes a specific process] by looking at a specific spatial field [i.e., a line-of-sight position] and speaking a specific activation word [i.e., a voice command] so that a user [i.e., the first or second user candidate] can execute the specific process)
Regarding claim 18, Nawana-Paul teaches all of the limitations of claim 17. Nawana also teaches:
the processor is further configured to detect the line-of-sight of the one of the first user candidate or the second user candidate within the imaging region ([0210] “The motion sensing system can include a projector 82 and an infrared (IR) and RGB camera,” where the IR camera serves as a line-of-sight detection unit within the imaging region that can detect any number of users in the sensor’s range)
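For illustration of the combined gaze-and-voice control that Nawana [0209]-[0210] describes, the following minimal sketch executes a specific process only when a detected line-of-sight position falls on the display screen and a specific activation word is spoken. The names and geometry are hypothetical and are not Nawana’s implementation.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: float
        y: float
        w: float
        h: float

        def contains(self, point):
            px, py = point
            return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

    def handle_no_touch(gaze_point, spoken_word, screen, actions):
        # Combined no-touch control (cf. Nawana [0209]): a specific process is
        # executed only when the detected line-of-sight position falls on the
        # display screen and a specific activation word is spoken.
        if screen.contains(gaze_point) and spoken_word in actions:
            return actions[spoken_word](gaze_point)
        return None

    # Example: gazing at the screen while saying "zoom" triggers the process
    # tied to the gaze position.
    screen = Rect(0.0, 0.0, 1.0, 0.6)
    actions = {"zoom": lambda p: f"zoom at {p}"}
    print(handle_no_touch((0.5, 0.3), "zoom", screen, actions))  # zoom at (0.5, 0.3)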
Regarding claim 19, Nawana teaches:
A control method, comprising: ([0002] “The present disclosure relates generally to systems and methods for surgical and interventional planning, support, post-operative follow-up, and functional recovery tracking”)
in a case where a medical display system controls display of display information on a display screen based on information output from a medical device ([0205] “FIG. 16 illustrates a user interface 80 showing instrument information in the form of a list of instruments used in the surgical procedure” where the user interface [i.e., the display] outputs information on the list of instruments [i.e., a medical device] used in a surgical procedure)
Regarding claim 19, Nawana does not explicitly teach, as taught by Paul:
recognizing a first user candidate from a plurality of user candidates present in a region from which at least the display screen is viewable; ([Figure 8] voice-enabled digital conversational assistant system (800); see also [0035] “the present system may include facial and/or voice recognition to distinguish between different speakers” where facial and voice recognition recognize a user candidate)
controlling, in a case where the first user candidate satisfies a specific predetermined condition in an imaging region that is imaged as a part of the region, the display information based on one of a voice of the first user candidate acquired in the region or an input triggered by the voice of the first user candidate, wherein ([0034] “The capability to identify users would enable an entire surgical team to use a single voice-enabled system, which could modify actions based on the speaker. The digital assistant could prioritize commands based on identified speakers such that commands by Speaker A might take priority over commands by Speaker B if they conflict,” where the digital assistant recognizes a voice [i.e., satisfies a first condition in the imaging region based on a voice of the first user] to modify actions based on a speaker [i.e., controls the display information]; see also [Paul 0035] “the present system may include facial and/or voice recognition to distinguish between different speakers,” where facial and voice recognition recognize a user candidate; see also [Paul 0016] “In every operating room, there are dozens of tasks and functions performed by various devices, instruments, systems, and subsystems that could readily be carried out using voice commands…. Display monitors,” where display systems are controlled)
the recognition is based on the imaging region and the voice of the first user candidate, ([Paul 0035] “the present system may include facial and/or voice recognition to distinguish between different speakers” where facial and voice recognition recognize a user candidate)
the specific condition is based on information regarding a specific situation in a surgery, ([0020] “In other medical facilities, some of the above devices, instruments, systems, and subsystems might be used, but others could be more commonly found in other situations, such as devices, instruments, systems, and subsystems used in an emergency room, pediatrics suite, radiology suite, etc. Thus, the above Table 2 is not exhaustive of the types of commands, instructions, comments, and statements, responses, and states/conditions one might expect in different facilities or areas within a hospital or other healthcare facility.” where the data