DETAILED ACTION
The present Office action is a non-final action on the merits.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 6, 2026 has been entered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application claims priority to foreign application CN202010421197.1, filed May 18, 2020, and is a national stage entry under 35 U.S.C. 371 of PCT/CN2021/090029, filed April 26, 2021.
Status of Claims
Claims 1, 2, 14, and 20 have been amended; claims 1-20 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1-13, claims 14-19, and claim 20 are each drawn to an information processing method, which is within the four statutory categories (i.e., process).
Claims 1-13 recite an information processing method, comprising:
receiving first information associated with an operation behavior of a medical testing device, the first information being associated with data collection performed by the medical testing device during operation, the medical testing device being an endoscope, the first information being at least partly received from the endoscope while in operation within a human body, and the first information representing at least a speed of a manual operation of the endoscope within the human body and whether the manual operation of the endoscope resulted in an imaging of each of predetermined key sites within the human body, the manual operation being implemented by an operator of the endoscope, the operation comprising the manual operation wherein receiving the first information comprises receiving at least a deviation degree of a trajectory, of the medical testing device during the operation, from a recommended trajectory of the medical testing device during the operation; and
outputting at least part of the first information by controlling a display device to a display, to the operator and while the operator is at least partly manually operating the endoscope within the human body, at least a representation of the deviation degree and a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ, of whether the predetermined key sites have been imaged by the endoscope during the manual operation within the human body.
Claims 14-19 recite an information processing method, comprising:
receiving, at a terminal device, first identification information associated with a medical device;
obtaining second identification information of an operator associated with the terminal device;
associating the medical device with the operator based on the first identification information and the second identification information;
while the medical device is associated with the operator, receiving first information associated with an operation behavior of the medical device, the first information being associated with data collection performed by the medical device during operation, the medical device being an endoscope, the first information being at least partly received from the endoscope while in operation within a human body, and the first information representing at least a speed of a manual operation of the endoscope within the human body and whether the manual operation of the endoscope resulted in an imaging of each of predetermined key sites within the human body, the manual operation being implemented by the operator, and the operation comprising the manual operation, wherein receiving the first information comprises receiving at least a deviation degree of a trajectory, of the medical device during the operation, from a recommended trajectory of the medical device during the operation; and
outputting at least part of the first information by controlling a display device to a display, to the operator and while the operator is at least partly manually operating the endoscope within the human body, at least a representation of the deviation degree and a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ, of whether the predetermined key sites have been imaged by the endoscope during the manual operation within the human body.
Claim 20 recites an information processing method, comprising:
receiving, at a terminal device, first indication information from a user, the first indication information indicating at least one operation performed by a medical device;
receiving second indication information from the user, the second indication information indicating an operator of the at least one operation;
associating the indicated at least one operation with the indicated operator; and
while the medical device is associated with the operator, receiving first information associated with an operation behavior of the medical device, the first information being associated with data collection performed by the medical device during operation, the medical device being an endoscope, the first information being at least partly received from the endoscope while in operation within a human body, and the first information representing at least a speed of a manual operation of the endoscope within the human body and whether the manual operation of the endoscope resulted in an imaging of each of predetermined key sites within the human body, the manual operation being implemented by the operator, and the operation comprising the manual operation, wherein receiving the first information comprises receiving at least a deviation degree of a trajectory, of the medical device during the operation, from a recommended trajectory of the medical device during the operation; and
outputting at least part of the first information by controlling a display device to a display, to the operator and while the operator is at least partly manually operating the endoscope within the human body, at least a representation of the deviation degree and a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ, of whether the predetermined key sites have been imaged by the endoscope during the manual operation within the human body.
The bolded limitations, given the broadest reasonable interpretation, cover a certain method of organizing human activity because they recite fundamental economic practices, commercial or legal interactions, and/or managing personal behavior or relationships or interactions between people – a process that is performed by an operator after comparing the actual data to ideal data – but for the recitation of generic computer components (e.g., in this case, receiving information associated with an operation behavior of a medical testing device). The underlined limitations are not part of the identified abstract idea (the method of organizing human activity); they are deemed “additional elements” and will be discussed in further detail below.
Dependent claims 2-13 and 15-19 are similarly rejected because they either further define/narrow the abstract idea and/or do not integrate the claim into a practical application or provide an inventive concept such that the claims are subject matter eligible, even when considered individually or as an ordered combination. These limitations only serve to further limit the abstract idea (or contain the same additional elements found in the independent claims), and hence are nonetheless directed towards fundamentally the same abstract idea as independent claims 1, 14, and 20.
The additional elements from claim 1 include:
a medical testing device (generally linking, MPEP 2106.05(h)).
The additional elements from claims 1, 14, and 20 include:
an endoscope (generally linking, MPEP 2106.05(h)).
a display device (apply it, MPEP 2106.05(f)).
a display (apply it, MPEP 2106.05(f)).
The additional elements from claims 14 and 20 include:
a terminal device (apply it, MPEP 2106.05(f)).
a medical device (generally linking, MPEP 2106.05(h)).
The dependent claims include the following additional elements beyond those recited in the independent claims:
a cloud storage device (apply it, MPEP 2106.05(f)).
multi-view switching display, video display, virtual reality display, augmented reality display, and 3D display (apply it, MPEP 2106.05(f)).
a QR code (generally linking, MPEP 2106.05(h)).
a barcode (generally linking, MPEP 2106.05(h)).
These additional elements in the independent claims are not integrated into a practical application because the additional elements (i.e., the limitations not identified as part of the abstract idea) amount to no more than limitations which:
amount to mere instructions to apply an exception – for example, the recitation of “a terminal device”, “a cloud storage device”, and “display”, which amounts to merely invoking a computer as a tool to perform the abstract idea, e.g., see Specification Paragraphs [0051], [0058], and [0143]-[0145] (See MPEP 2106.05(f)).
generally link the abstract idea to a particular technological environment or field of use (e.g., computers) (See MPEP 2106.05(h)).
Furthermore, the claims do not include additional elements that are sufficient to amount to “significantly more” than the judicial exception because the additional elements (i.e., the elements other than the abstract idea) amount to no more than limitations which:
amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by:
The Specification discloses that the additional elements are well-understood, routine, and conventional in nature (i.e., Specification Paragraphs [0051], [0058], and [0143]-[0145] disclose that the additional elements (i.e., a terminal device, a cloud storage device, and a display) comprise a plurality of different types of generic computing systems that are configured to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry (i.e., healthcare));
Relevant court decisions: The following court decision demonstrates well-understood, routine, and conventional activities (e.g., see MPEP 2106.05(d)(II)): receiving medication use data, e.g., see Intellectual Ventures v. Symantec; similarly, the current invention receives operation behavior data of a medical testing device.
Dependent claims 2-13 and 15-19 include other limitations, but none of these functions are deemed significantly more than the abstract idea. Thus, taken alone, the additional elements do not amount to “significantly more” than the above identified abstract idea. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually, and there is no indication that the combination of elements improves any other technology, and their collective functions merely provide conventional computer implementation.
The application is an attempt to organize human activity using methods for processing information associated with an operation behavior of a medical testing device. The claimed concept is medical quality control, and more particularly an information processing method, an electronic device, and a computer storage medium, which is not patentable. Therefore, whether taken individually or as an ordered combination, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hares (U.S. Pub. No. 2020/0110936 A1) in view of Frimer (U.S. Pub. No. 2014/0194896 A1) and Davidson (U.S. Pub. No. 2015/0313445 A1).
Regarding claim 1, Hares discloses an information processing method, comprising:
receiving first information associated with an operation behavior of a medical testing device, the first information being associated with data collection performed by the medical testing device during operation, the medical testing device being an endoscope, the first information being at least partly received from the endoscope while in operation within a human body, and the first information representing at least a manual operation of the endoscope within the human body and whether the manual operation of the endoscope resulted in an imaging of each of predetermined key sites within the human body, the manual operation being implemented by an operator of the endoscope, the operation comprising the manual operation, wherein receiving the first information comprises receiving at least a deviation degree of a trajectory, of the medical testing device during the operation, from a recommended trajectory of the medical testing device during the operation (Paragraphs [0002], [0004], [0007]-[0008], [0012], and [0107] discuss the endoscope mounts to the end of the robot arm and is attachable to and detachable from the robot arm via the robot arm and endoscope interfaces, the endoscope is operable independently of the robot arm in its detached state and can be operated manually by a member of the operating room staff when detached from the robot arm and it penetrates the body of the patient at a port so as to access the surgical site or view of the relevant area and the images captured by the endoscope (which may be collectively referred to herein as the endoscope video) being used during surgery, the images captured by the endoscope may be recorded and subsequently used for a variety of purposes such as, but not limited to, learning and/or teaching surgical procedures, and assessing and/or reviewing the performance of the surgeon and methods and systems for automatically augmenting an endoscope video for a task performed by a surgical robot system based on status data 
that describes the status of the surgical robot system during the task and the event detector receives status data and endoscope video captured during surgery; each task has a predefined set of steps and the system may detect when there has been a deviation from the expected set of steps (e.g. when a step not in the list of expected steps is performed, or when a step in the list of expected steps is performed out of order).); and
outputting at least part of the first information to a display, to the operator and while the operator is at least partly manually operating the endoscope within the human body, a representation of the deviation degree and of whether the predetermined key sites have been imaged by the endoscope during the manual operation within the human body (Paragraphs [0002]-[0004], [0007]-[0008], [0062], [0107]-[0110] and FIG. 2 discuss an operator console with a screen and the endoscope is operable independently of the robot arm in its detached state and can be operated manually by a member of the operating room staff when detached from the robot arm and it penetrates the body of the patient at a port so as to access the surgical site or view of the relevant area and in response to detecting, from the status data, that an event occurred during the task (e.g. surgery) the event detector may be configured to generate an output indicating that an event has been detected, the type of event, the time of the event and optionally the duration of the event and the operator console comprises a display used to display a video stream of the surgical site to help guide surgeon’s tools; each task has a predefined set of steps and the system may detect when there has been a deviation from the expected set of steps (e.g. when a step not in the list of expected steps is performed, or when a step in the list of expected steps is performed out of order), and detect a patient event when the patient health information or data indicates that one or more of the patient's health metrics (e.g. vital signs) have fallen outside a range, and further identify the level of the different health metrics just prior to a negative outcome occurring and generate the ranges from the identified levels.).
Hares does not explicitly disclose:
at least a speed of a manual operation of the endoscope, and
by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ.
Frimer teaches:
at least a speed of a manual operation of the endoscope (Paragraphs [0662]-[0663] discuss speed of the tip of the endoscope is varied.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Hares to include, at least a speed of a manual operation of the endoscope, as taught by Frimer, in order to improve the interface between the surgeon and the operating medical assistant or between the surgeon and an endoscope system for laparoscopic surgery. (Frimer Paragraph [0001]).
Davidson teaches:
by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ (Paragraphs [0009], [0108] and FIG. 4E discuss captured images are sent to a control unit coupled with the endoscope via one of the channels present in the flexible tube, for being displayed on a screen coupled with the control unit and reference frame overlaid with the captured images as shown in FIG. 4E, is displayed on a screen coupled with the endoscope, enabling the operating physician to visualize the regions of the colon that have been scanned.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Hares to include, by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ, as taught by Davidson, in order to enable an operating physician to scan a body cavity efficiently without missing any region therein and ensure an endoscopic scan with a complete and uniform coverage of the body cavity being scanned that provides high quality scanning images, of a body cavity being endoscopically scanned, that may be analyzed, tagged, marked and stored for comparisons with corresponding scanned images of the body cavity obtained at a later point in time. (Davidson Paragraph [0012]).
Regarding claim 2, Hares discloses wherein receiving the first information comprises receiving at least one of the following:
a set of locations of the medical testing device during the operation (Paragraph [0009] discusses the motion and/or position of the endoscope during surgery can be monitored and logged.);
sequence information associated with the set of locations (Paragraph [0030] discusses status data may comprise surgical robot position information describing a movement and/or a position of the at least one surgical robot during the task; for example, the event detector may be configured to identify a suturing step in the task by identifying patterns in the instrument information and the surgical robot position information.);
time information associated with the set of locations (Paragraphs [0057]-[0058] discuss receive status data that indicates or describes the status of the surgical robot system during the robotic task (e.g. surgery) that is time synchronized to the endoscope video captured during the task and allows events detected from the status data to be correlated to a particular time or period of time in the endoscope video and include data relating to the position and movement of the surgical instrument.);
the trajectory of the medical testing device during the operation (Paragraph [0061] discusses the event detector analyses status data to identify events that occurred during surgery and patterns indicate an event has occurred may be predetermined, and can indicate a departure from the expected sequence events.);
the recommended trajectory of the medical testing device during the operation (Paragraphs [0060]-[0061] and [0092] discuss status data may be provided to the event detector in real time while the task (e.g. surgery) is being performed and predetermined events may be automatically selected based on the type of event, the particular operator and/or any other criteria; or the operator or another user may be able to manually enter a set of events that may occur during the task or automatically generate, based on status data for previously performed tasks, high probability events or important events, for example, and it is these high probability events or important events that are presented to the operator (e.g. surgeon).);
speed information of the medical testing device during the operation (Paragraphs [0062], [0076], and [0118] discuss status data detects an event occurred during the surgery and the event detector can output the type of event, the time of the event and duration of the event and the time taken to perform the task.);
smoothness of the medical testing device during the operation (Paragraph [0118] discusses the system determines the performance level of the operator including the smoothness of the movement of the robot arm and/or instrument in performing a task.);
image data collected by the medical testing device during the data collection (Paragraph [0013] discusses endoscope video and status data captured during the task.);
image quality of the image data (Paragraphs [0003] and [0122] discuss display of a video stream of the surgical site and that if the system detects a lot of low frequency components in the frequency domain this may indicate fatigue or intoxication, whereas lots of high frequency components in the frequency domain may indicate a high level of shakiness or panicked movement.);
an organ image of the organ for which the operation is directed (Paragraphs [0003]-[0004] discuss a display, visible to a user operating the input devices, used to video stream the surgical site in the opening of the body, such as the mouth or nostrils.);
a degree of qualification of a to-be-tested part of the medical testing device to medical testing device examination (Paragraph [0022] discusses instrument events that include a change in instrument attached to a surgical robot arm of the surgical robot system; where at least one of instrument attached to a surgical robot arm of the surgical robot system is an energized instrument, a change in a status of the energized instrument; where at least one instrument attached to a surgical robot arm of the surgical robot system is an endoscope, cleaning of the endoscope; where at least one instrument attached to a surgical robot arm of the surgical robot system is an endoscope, performing a white balance on an imaging system of the endoscope; where at least one instrument attached to a surgical robot arm of the surgical robot system is an endoscope, a size or frequency of movement of the endoscope falling outside a range; and a change in an instrument being actively controlled by the surgical robot system.);
a score of the operation behavior (Paragraph [0124] discusses determine a single operator performance score.);
statistical information of the operation behavior (Paragraphs [0117]-[0119] and [0124] discuss determining multiple operator performance scores, determining if a performance event has occurred if any of the performance criteria have fallen below an acceptable level or if any of the performance criteria change (by, for example, at least a specific amount), and that performance may be ranked and status data in a repository compared to determine the performance level of the operator.);
comparative information of the statistical information (Paragraphs [0117]-[0119] and [0124] discuss determining multiple operator performance scores, determining if a performance event has occurred if any of the performance criteria have fallen below an acceptable level or if any of the performance criteria change (by, for example, at least a specific amount), and that performance may be ranked and status data compared to determine the performance level of the operator.);
suggestions of improvement to the operation behavior (Paragraph [0092] discusses automatically generate, based on status data for previously performed tasks, high probability events or important events, for example, and it is these high probability events or important events that are presented to the operator.);
comparison of the different first information for the different operation behavior (Paragraphs [0117]-[0119] and [0124] discuss determining multiple operator performance scores, determining if a performance event has occurred if any of the performance criteria have fallen below an acceptable level or if any of the performance criteria change (by, for example, at least a specific amount), and that performance may be ranked and status data in a repository compared to determine the performance level of the operator.); and
the operation behavior determined according to the score and a score threshold (Paragraphs [0117]-[0119] discuss a data repository in which the status data for previously performed tasks is stored and in which the performance of the operator (e.g. surgeon) has been assessed or identified (e.g. the performance may be ranked on a scale such as, but not limited to a scale from 1 to 10); and the status data may be compared to the status data stored in the repository to determine the performance level of the operator (e.g. surgeon) for the task and detect if the performance level falls outside of an acceptable range.).
Regarding claim 3, Hares discloses wherein the statistical information comprises at least one of the following:
a quantity of image data collected by the operation behavior in at least part of the set of locations (Paragraphs [0009], [0063], [0099], and [0122] discuss the position during surgery is monitored and logged and endoscope video captured by an endoscope during the task and detect from the position and/or movement information that the size (i.e. magnitude) or frequency of the endoscope movement has fallen outside a range.);
division of the operation behavior according to different operators of the medical testing device (Paragraphs [0111]-[0113] and [0124] discuss determining multiple operation performance scores and status data from an operator, and that ranges may be determined for different operators.); and
division of the operation behavior according to organizations to which different operators of the medical testing device belong (Paragraphs [0067], [0104], [0111]-[0113], and [0117]-[0118] discuss status data for different operators; the operator can be a surgeon, other user, or other operating team member, and the system compares performance metrics of the operators for previously performed tasks located in a hospital or treatment center.).
Regarding claim 4, Hares discloses wherein outputting the at least part of the first information comprises at least one of the following:
outputting at least part of the set of locations in association with the organ image (Paragraphs [0009] and [0065] discuss the motion and position of the endoscope is monitored and logged and output endoscope video relating to an identified event.); and
outputting at least part of the trajectory in association with the organ image (Paragraphs [0060] and [0092] discuss status data may be provided to the event detector in real time while the task (e.g. surgery) is being performed and predetermined events may be automatically selected based on the type of event, the particular operator and/or any other criteria; or the operator or another user may be able to manually enter a set of events that may occur during the task or automatically generate, based on status data for previously performed tasks, high probability events or important events, for example, and it is these high probability events or important events that are presented to the operator (e.g. surgeon) based on the status data.).
Regarding claim 5, Hares discloses wherein outputting at least part of the first information comprises at least one of the following:
outputting the trajectory and the recommended trajectory in association (Paragraphs [0103]-[0104] and [0119] discuss detect task events and the event detector may detect the steps of the task based on patterns detected in the status data that match patterns and analyze the status data related to previously performed tasks of the same type as the current task and generate a map that links the identified steps and analyzing the endoscope video and the path taken can be compared to the best or optimum path.); and
outputting the trajectory and the deviation degree in association (Paragraph [0119] discusses outcome data about the surgery and comparing the path taken and the closer the path taken is to the best path the better the performance and the further away the path is from the best the poorer the performance.).
Regarding claim 6, Hares discloses wherein receiving the first information comprises:
receiving the first information from a cloud storage device, the first information being transmitted from a data analysis device of the medical testing device to the cloud storage device, and the first information being determined based on input data of the medical testing device (Paragraphs [0060] and [0064] discuss the event detector receiving status data generated by the surgical robot system via a wireless or wired communication connection from a storage device; for example, the endoscope may provide the endoscope video to the augmenter via wireless communication, the video may be stored in a storage device, and the augmenter can read the video from the storage medium.).
Regarding claim 7, Hares discloses wherein outputting the at least part of the first information comprises:
receiving an input instruction for the first information (Paragraphs [0013], [0061], and [0092] discuss identify one or more patterns in the status data that indicate an event occurred during the task; and an augmenter receive an endoscope video captured during the task, the endoscope video being time synchronized with the status data; the event detector can analyze the status data to identify events, for example, a departure from the expected sequence events (e.g. a cutting step was performed when a suture step was expected) and the system may generate important events presented to the operator.); and
determining the part of the first information in response to the input instruction (Paragraph [0013] discusses identify one or more patterns in the status data that indicate an event occurred during the task.).
Regarding claim 8, Hares discloses wherein the input instruction further comprises a screening instruction, and the screening instruction comprises screening conditions comprising at least one of the following:
an operator identification of an operator associated with an operation behavior of the medical testing device, an organization identification of an organization to which an operator of the medical testing device belongs, a date the operation behavior occurred, a time period during which the operation behavior occurred, a score threshold of scoring to the operation behavior, a degree of qualification threshold of a degree of qualification of a to-be-tested part of the medical testing device to medical testing device examination, a statistical information threshold of statistical information of the operation behavior and an image quality threshold (Paragraphs [0022], [0057], [0088], [0091], [0104], [0116]-[0118], [0124], [0131], FIGS. 2 and 9 discuss a user can log in or authenticate themselves and view events based on the privileges of the user; an operator console and status data may describe the health and movement of the operator during the task, information that indicates the users of the system and the surgeon or other members of the surgical team; the event detector detects performance events that relate to the performance of the operator and if it has fallen below an acceptable level, the time taken to perform the task, determine an operator performance score, the status data may be compared to status data stored in the repository and the endoscope video and status data are timestamped; further, identify instrument events, such as a change in instrument attached to a surgical robot arm, a change in status of an energized instrument, cleaning of the instrument, performing a white balance on an imaging system, identifying a size or frequency of movement falling outside a range.).
Regarding claim 9, Hares discloses wherein outputting the at least part of the first information comprises one of the following:
displaying the at least part of the first information (Paragraph and FIG. 9 discuss operator console display of status data.); and
printing the at least part of the first information (Paragraphs [0062] and [0148] discuss the input/output controller may output status data to printing device.).
Regarding claim 10, Hares discloses wherein outputting the at least part of the first information comprises:
displaying the at least part of the first information in at least one of the following display manners (Paragraph and FIG. 9 discuss operator console display of status data.):
multi-view switching display, video display, virtual reality display, augmented reality display, and 3D display (Paragraphs [0003], [0140], and FIGS. 2 and 9 discuss generate 3D virtual reality reconstruction of instruments and their positions and movement, and endoscope video can be displayed.).
Regarding claim 11, Hares discloses wherein outputting the at least part of the first information comprises:
selecting a display manner based on a type of the first information (Paragraphs [0090], [0092], [0138], [0148], and FIG. 9 discuss operator console has a display which displays endoscope video, list of steps in a task, allows operator to select which of steps is being performed and be related to status data, for example, the augmented endoscope video may be augmented so that when the video is played back a banner at the top, bottom or side of the screen displays event information—i.e. it displays information about the detected events at the appropriate or relevant time of the endoscope video; also display information may be provided to a display device such as a printing device.); and
displaying the at least part of the first information in the selected display manner (Paragraph [0138] and [0140] discuss for example, the augmented endoscope video may be augmented so that when the video is played back a banner at the top, bottom or side of the screen displays event information—i.e. it displays information about the detected events at the appropriate or relevant time of the endoscope video; for example, the augmentation system may synchronize and link the augmented endoscope video with the 3D VR reconstruction of the instruments so that when the endoscope video is viewed or played the 3D VR reconstruction can be viewed alongside the augmented endoscope video.).
Regarding claim 12, Hares discloses further comprising:
receiving a first input indicating another display manner that is different from a current display manner of the at least part of the first information (Paragraphs [0137]-[0140] discuss the information or markers that are viewable when an augmented endoscope video is viewed or played using, for example, a media player, and in addition the system may generate a 3D VR reconstruction of the instruments and their positions and movement and if selected the system may then synchronize and link the augmented endoscope video with the 3D VR reconstruction of the instruments so that when the endoscope video is viewed or played the 3D VR reconstruction can be viewed alongside the augmented endoscope video.); and
displaying the at least part of the first information in the other display manner (Paragraph [0140] discusses the system may generate a 3D VR reconstruction of the instruments and their positions and movement and if selected the system may then synchronize and link the augmented endoscope video with the 3D VR reconstruction of the instruments so that when the endoscope video is viewed or played the 3D VR reconstruction can be viewed alongside the augmented endoscope video rather than just viewing the endoscope video.).
Regarding claim 13, Hares discloses further comprising:
receiving a second input indicating to-be-displayed information (Paragraphs [0065], [0068] discuss the control unit receives inputs from an operator console including first and second hand controllers, foot pedal(s), voice recognition, gesture recognition, eye recognition, etc., from the surgical robots, sensor data from position sensors and torques sensors located on the robot arm joints, force feedback, data from or about the surgical instruments etc. and the user may select one of the identified events and the system may output the portion of the endoscope video relating to the identified event.); and
outputting the first information that matches the to-be-displayed information and not displayed in the first information (Paragraph [0065] discusses in response to the event detector determining from the status data that an event occurred during the task, add information (e.g. a marker) to the endoscope video that identifies the event that occurred and when the event occurred, and optionally identifies the duration of the event, to generate an augmented endoscope video and the information or marker may not be visible to a user when the augmented endoscope video is subsequently played or viewed in a video player. For example, in some cases the information or marker may be detectable by a special system or software and present the user with a list of identified events. If a user selects one of the identified events the system may present or output the portion of the endoscope video relating to the identified event.).
Regarding claim 14, Hares discloses an information processing method, comprising:
receiving, at a terminal device, first identification information associated with a medical device (Paragraph [0022] and FIG. 2 discuss an operator console with a screen and identify instrument events that include the one or more instrument events may comprise one or more of: a change in instrument attached to a surgical robot arm of the surgical robot system; where at least one of instrument attached to a surgical robot arm of the surgical robot system is an energized instrument, a change in a status of the energized instrument; where at least one instrument attached to a surgical robot arm of the surgical robot system is an endoscope, cleaning of the endoscope; where at least one instrument attached to a surgical robot arm of the surgical robot system is an endoscope, performing a white balance on an imaging system of the endoscope; where at least one instrument attached to a surgical robot arm of the surgical robot system is an endoscope, a size or frequency of movement of the endoscope falling outside a range; and a change in an instrument being actively controlled by the surgical robot system.);
obtaining second identification information of an operator associated with the terminal device (Paragraph [0131] discusses to generate an augmented endoscope video based on the status data and a user may have to log in and the events that are detected by the system are based on the privileges of the user.);
associating the medical device with the operator based on the first identification information and the second identification information (Paragraph [0131] discusses a user being able to manually select which events the augmentation system (e.g. event detector) is to detect, the augmentation system (e.g. event detector) may be configured to automatically select the events that are to be detected based on the user that is logged into the augmentation system, for example, in some cases, to generate an augmented endoscope video based on the status data a user may have to log in or otherwise authenticate themselves with the augmentation system and the events that are detected by the augmentation system (e.g. event detector) are based on, for example, the privileges of the user using the endoscope. For example, the operator that performed the task may be able to access more events than another operator (e.g. surgeon).);
while the medical device is associated with the operator, receiving first information associated with an operation behavior of the medical device, the first information being associated with data collection performed by the medical device during operation, the medical device being an endoscope, the first information being at least partly received from the endoscope while in operation within a human body, and the first information representing at least a manual operation of the endoscope within the human body and whether the manual operation of the endoscope resulted in an imaging of each of predetermined key sites within the human body, the manual operation being implemented by the operator, and the operation comprising the manual operation, wherein receiving the first information comprises receiving at least a deviation degree of a trajectory, of the medical device during the operation, from a recommended trajectory of the medical device during the operation (Paragraphs [0002], [0004], [0007]-[0008], [0012], and [0107] discuss the endoscope mounts to the end of the robot arm and is attachable to and detachable from the robot arm via the robot arm and endoscope interfaces, the endoscope is operable independently of the robot arm in its detached state and can be operated manually by a member of the operating room staff when detached from the robot arm and it penetrates the body of the patient at a port so as to access the surgical site or view of the relevant area and the images captured by the endoscope (which may be collectively referred to herein as the endoscope video) being used during surgery, the images captured by the endoscope may be recorded and subsequently used for a variety of purposes such as, but not limited to, learning and/or teaching surgical procedures, and assessing and/or reviewing the performance of the surgeon and methods and systems for automatically augmenting an endoscope video for a task performed by a surgical robot system based on status 
data that describes the status of the surgical robot system during the task and the event detector receives status data and endoscope video captured during surgery; each task has a predefined set of steps and the system may detect when there has been a deviation from the expected set of steps (e.g. when a step not in the list of expected steps is performed, or when a step in the list of expected steps is performed out of order).); and
outputting at least part of the first information to a display, to the operator and while the operator is at least partly manually operating the endoscope within the human body, a representation of the deviation degree and of whether the predetermined key sites have been imaged by the endoscope during the manual operation within the human body (Paragraphs [0002]-[0004], [0007]-[0008], [0062], [0107]-[0110] and FIG. 2 discuss an operator console with a screen and the endoscope is operable independently of the robot arm in its detached state and can be operated manually by a member of the operating room staff when detached from the robot arm and it penetrates the body of the patient at a port so as to access the surgical site or view of the relevant area and in response to detecting, from the status data, that an event occurred during the task (e.g. surgery) the event detector may be configured to generate an output indicating that an event has been detected, the type of event, the time of the event and optionally the duration of the event and the operator console comprises a display used to display a video stream of the surgical site to help guide surgeon’s tools; each task has a predefined set of steps and the system may detect when there has been a deviation from the expected set of steps (e.g. when a step not in the list of expected steps is performed, or when a step in the list of expected steps is performed out of order), and detect a patient event when the patient health information or data indicates that one or more of the patient's health metrics (e.g. vital signs) have fallen outside a range, and further identify the level of the different health metrics just prior to a negative outcome occurring and generate the ranges from the identified levels.).
Hares does not explicitly disclose:
at least a speed of a manual operation of the endoscope.
by controlling a display device to a display, at least a representation of the deviation degree and a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ.
Frimer teaches:
at least a speed of a manual operation of the endoscope (Paragraphs [0662]-[0663] discuss speed of the tip of the endoscope is varied.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Hares to include, at least a speed of a manual operation of the endoscope, as taught by Frimer, in order to improve the interface between the surgeon and the operating medical assistant or between the surgeon and an endoscope system for laparoscopic surgery. (Frimer Paragraph [0001]).
Davidson teaches:
by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ (Paragraphs [0009], [0108] and FIG. 4E discuss captured images are sent to a control unit coupled with the endoscope via one of the channels present in the flexible tube, for being displayed on a screen coupled with the control unit and reference frame overlaid with the captured images as shown in FIG. 4E, is displayed on a screen coupled with the endoscope, enabling the operating physician to visualize the regions of the colon that have been scanned.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Hares to include, by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ, as taught by Davidson, in order to enable an operating physician to scan a body cavity efficiently without missing any region therein and ensure an endoscopic scan with a complete and uniform coverage of the body cavity being scanned that provides high quality scanning images, of a body cavity being endoscopically scanned, that may be analyzed, tagged, marked and stored for comparisons with corresponding scanned images of the body cavity obtained at a later point in time. (Davidson Paragraph [0012]).
Regarding claim 15, Hares discloses further comprising:
associating an operation performed by the operator through the medical device with the operator (Paragraph [0131] discusses a user being able to manually select which events the augmentation system (e.g. event detector) is to detect, the augmentation system (e.g. event detector) may be configured to automatically select the events that are to be detected based on the user that is logged into the augmentation system, for example, in some cases, to generate an augmented endoscope video based on the status data a user may have to log in or otherwise authenticate themselves with the augmentation system and the events that are detected by the augmentation system (e.g. event detector) are based on, for example, the privileges of the user using the endoscope. For example, the operator that performed the task may be able to access more events than another operator (e.g. surgeon).).
Regarding claim 16, Hares discloses wherein receiving the first identification information comprises receiving the identification information by at least one of the following:
receiving an input of a device identification corresponding to the identification information (Paragraph [0079] discusses means for automatically detecting the type of instrument (e.g. each instrument may comprise an RFID or other component which is configured to automatically provide information on its identity to the surgical robot system when it is attached to a robot arm) and this information may be provided to the event detector as status data.);
receiving an audio input corresponding to the identification information (Paragraph [0059] discusses status data may be generated by audio, for example, a camera captures audio in the room in which surgery is performed.);
receiving a video input corresponding to the identification information (Paragraph [0059] discusses status data may be generated by video, for example, a camera captures video in the room in which surgery is performed.); and
receiving a tactile input corresponding to the identification information (Paragraph [0059] discusses status data may be generated for example, from patient smart bed.).
Regarding claim 17, Hares discloses wherein the second identification information comprises at least one of the following:
a name of the operator, a username of the operator, a telephone number of the operator, an image associated with the operator, a job title of the operator (Paragraph [0131] and FIGS. 2 and 9 discuss an operator console controlled by an operator, for example, a surgeon, to generate an augmented endoscope video based on the status data and a user will log in or otherwise authenticate themselves and the events that are detected are based on the privileges of the user.).
Regarding claim 18, Hares discloses further comprising:
if determining to cease use of the medical device, disassociating the medical device from the operator (Paragraph [0022] discusses identify instrument events that include the one or more instrument events may comprise one or more of: a change in instrument attached to a surgical robot arm of the surgical robot system; where at least one of instrument attached to a surgical robot arm of the surgical robot system is an energized instrument, a change in a status of the energized instrument; cleaning of the endoscope; performing a white balance on an imaging system of the endoscope; a size or frequency of movement of the endoscope falling outside a range; and a change in an instrument being actively controlled by the surgical robot system.).
Regarding claim 19, Hares discloses further comprising determining to cease use of the medical device according to at least one of (Paragraph [0022] discusses patterns to identify instrument events and change in an instrument being actively controlled by the system.):
time duration that the terminal device associated with the medical device exceeding a threshold length (Paragraphs [0022] and [0100] discuss status and patterns to identify instrument events, including, a change in instrument attached to a surgical robot arm of the surgical robot system; a size or frequency of movement of the endoscope falling outside a range; and a change in an instrument being actively controlled by the surgical robot system.);
the medical device entering a dormant state (Paragraphs [0022] and [0110] discuss patterns to identify instrument events, including, a change in instrument attached to a surgical robot arm of the surgical robot system; where at least one of instrument attached to a surgical robot arm of the surgical robot system is an energized instrument, a change in a status of the energized instrument; cleaning of the endoscope; performing a white balance on an imaging system of the endoscope; a size or frequency of movement of the endoscope falling outside a range; and a change in an instrument being actively controlled by the surgical robot system; detecting active instrument changes.); and
the medical device entering a shutdown state (Paragraph [0022] discusses patterns to identify instrument events, including, a change in instrument attached to a surgical robot arm of the surgical robot system; where at least one of instrument attached to a surgical robot arm of the surgical robot system is an energized instrument, a change in a status of the energized instrument; cleaning of the endoscope; performing a white balance on an imaging system of the endoscope; a size or frequency of movement of the endoscope falling outside a range; and a change in an instrument being actively controlled by the surgical robot system.).
Regarding claim 20, Hares discloses an information processing method, comprising:
receiving, at a terminal device, first indication information from a user, the first indication information indicating at least one operation performed by a medical device (Paragraphs [0025], [0068], [0131] and FIGS. 2 and 9 discuss an operator console controlled by an operator and a surgical system with a control unit receives inputs from an operator console and the surgical robots, the inputs include sensor data from position sensors on the robot.);
receiving second indication information from the user, the second indication information indicating an operator of the at least one operation (Paragraph [0131] and FIGS. 2 and 9 discuss an operator console controlled by an operator, for example, a surgeon, to generate an augmented endoscope video based on the status data and a user will log in or otherwise authenticate themselves and the events that are detected are based on the privileges of the user.);
associating the indicated at least one operation with the indicated operator (Paragraph [0131] and FIGS. 2 and 9 discuss an operator console controlled by an operator, for example, a surgeon, to generate an augmented endoscope video based on the status data and a user will log in or otherwise authenticate themselves and the events that are detected are based on the privileges of the user.); and
while the medical device is associated with the operator, receiving first information associated with an operation behavior of the medical device, the first information being associated with data collection performed by the medical device during operation, the medical device being an endoscope, the first information being at least partly received from the endoscope while in operation within a human body, and the first information representing at least a manual operation of the endoscope within the human body and whether the manual operation of the endoscope resulted in an imaging of each of predetermined key sites within the human body, the manual operation being implemented by the operator, and the operation comprising the manual operation, wherein receiving the first information comprises receiving at least a deviation degree of a trajectory, of the medical device during the operation, from a recommended trajectory of the medical device during the operation (Paragraphs [0002], [0004], [0007]-[0008], [0012] discuss the endoscope mounts to the end of the robot arm and is attachable to and detachable from the robot arm via the robot arm and endoscope interfaces, the endoscope is operable independently of the robot arm in its detached state and can be operated manually by a member of the operating room staff when detached from the robot arm and it penetrates the body of the patient at a port so as to access the surgical site or view of the relevant area and the images captured by the endoscope (which may be collectively referred to herein as the endoscope video) being used during surgery, the images captured by the endoscope may be recorded and subsequently used for a variety of purposes such as, but not limited to, learning and/or teaching surgical procedures, and assessing and/or reviewing the performance of the surgeon and methods and systems for automatically augmenting an endoscope video for a task performed by a surgical robot system based on status data that 
describes the status of the surgical robot system during the task and the event detector receives status data and endoscope video captured during surgery.); and
outputting at least part of the first information to a display, to the operator and while the operator is at least partly manually operating the endoscope within the human body, a representation of the deviation degree and of whether the predetermined key sites have been imaged by the endoscope during the manual operation within the human body (Paragraphs [0002]-[0004], [0007]-[0008], [0062], [0107]-[0110] and FIG. 2 discuss an operator console with a screen and the endoscope is operable independently of the robot arm in its detached state and can be operated manually by a member of the operating room staff when detached from the robot arm and it penetrates the body of the patient at a port so as to access the surgical site or view of the relevant area and in response to detecting, from the status data, that an event occurred during the task (e.g. surgery) the event detector may be configured to generate an output indicating that an event has been detected, the type of event, the time of the event and optionally the duration of the event and the operator console comprises a display used to display a video stream of the surgical site to help guide surgeon’s tools; each task has a predefined set of steps and the system may detect when there has been a deviation from the expected set of steps (e.g. when a step not in the list of expected steps is performed, or when a step in the list of expected steps is performed out of order), and detect a patient event when the patient health information or data indicates that one or more of the patient's health metrics (e.g. vital signs) have fallen outside a range, and further identify the level of the different health metrics just prior to a negative outcome occurring and generate the ranges from the identified levels.).
Hares does not explicitly disclose:
at least a speed of a manual operation of the endoscope.
by controlling a display device to a display, at least a representation of the deviation degree and a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ.
Frimer teaches:
at least a speed of a manual operation of the endoscope (Paragraphs [0662]-[0663] discuss speed of the tip of the endoscope is varied.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Hares to include, at least a speed of a manual operation of the endoscope, as taught by Frimer, in order to improve the interface between the surgeon and the operating medical assistant or between the surgeon and an endoscope system for laparoscopic surgery. (Frimer Paragraph [0001]).
Davidson teaches:
by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ (Paragraphs [0009], [0108] and FIG. 4E discuss captured images are sent to a control unit coupled with the endoscope via one of the channels present in the flexible tube, for being displayed on a screen coupled with the control unit and reference frame overlaid with the captured images as shown in FIG. 4E, is displayed on a screen coupled with the endoscope, enabling the operating physician to visualize the regions of the colon that have been scanned.).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Hares to include, by controlling a display device to a display, at least a representation of an internal organ of the human body with indications, overlaid over the representation of the internal organ, as taught by Davidson, in order to enable an operating physician to scan a body cavity efficiently without missing any region therein and ensure an endoscopic scan with a complete and uniform coverage of the body cavity being scanned that provides high quality scanning images, of a body cavity being endoscopically scanned, that may be analyzed, tagged, marked and stored for comparisons with corresponding scanned images of the body cavity obtained at a later point in time. (Davidson Paragraph [0012]).
Response to Arguments
Applicant’s arguments filed February 6, 2026 have been fully considered.
Rejections under 35 U.S.C. 112:
Applicant’s amendments overcome the previous 35 U.S.C. 112 rejection. Examiner withdraws the previous rejection.
Rejections under 35 U.S.C. 101:
With respect to claim 1 and the Prong 1 35 U.S.C. 101 rejection, Applicant’s amendment fails to overcome the previous rejection. Claim 1 as amended recites an abstract idea, certain methods of organizing human activity. See MPEP 2106.04(a)(2)(II)(C) Managing Personal Behavior or Relationships or Interactions Between People.
Applicant argues, “this rejection has, in principle, the same material errors as made by the USPTO Appeal Board in Ex Parte Desjardins, which is precedential and therefore binding on the Examiner. That is, like noted at pages 15 and 16 of the previous Amendment in this application, the claims in this application satisfy the criteria of MPEP 2106.04(d)(1) for finding patent eligibility under 35 USC 101 - the specification describes a technical problem and solution, and the claims reflect that solution.” (Remarks, pages 11-12). Examiner respectfully disagrees. As recited above, the claims cover a certain method of organizing human activity, but for the recitation of generic computer components (e.g., in this case, receiving information associated with an operation behavior of a medical testing device). Further, the Application is distinguishable from Ex Parte Desjardins. The Application relates to the field of medical quality control and to providing medical assistance to guide the operation of endoscopy by performing quality control on operation behaviors. There is no technical problem; the Application is comparing the actual data to ideal data, which is part of the abstract idea.
While a practical application is one way to overcome the Prong 2 35 U.S.C. 101 rejection, here, claim 1 fails to integrate the recited judicial exception into a practical application. Applicant states, “A claim reciting a judicial exception is not directed to the judicial exception if it also recites additional elements demonstrating that the claim as a whole integrates the exception into a practical application. One way to demonstrate such integration is when the claimed invention improves the functioning of a computer or improves another technology or technical field. The application or use of the judicial exception in this manner meaningfully limits the claim by going beyond generally linking the use of the judicial exception to a particular technological environment, and thus transforms a claim into patent-eligible subject matter.” (Remarks, page 12). Examiner respectfully disagrees. Examiner maintains that the improvement is to the abstract idea itself. Medical quality control, i.e., receiving information associated with an operation behavior of a medical testing device, the information being associated with data collection performed by the medical testing device during operation, is not a technical problem rooted in the technology and is directed to the abstract idea. As indicated in the above rejection, the additional elements recited in the claims are recited at an “apply it” level and are merely used as tools to implement the abstract idea. There is no improvement to the functioning of a computer, and there is no improvement to another technology. Applicant states, “But regardless of Enfish as in that ARP, the criteria of MPEP 2106.04(d)(1) as above are met also in this application. The specification in this application describes an improvement and the claims reflect that improvement.
The rejection here acquiesces as such, but just instead responded that the improvement is to an abstract idea - which shows the rejection to be clearly improper as highlighted above in the precedential Ex Parte Desjardins which, as being precedential is binding on this Examiner and should be taken as reasons to withdraw the rejection.” (Remarks, page 17). The Specification and the Application relate to medical quality control and compare actual data to ideal data, which is part of the abstract idea. As indicated in the above rejection, the additional elements do not result in a practical application, as they are recited at an “apply it” level and are merely used as tools to implement the abstract idea. As such, they are not improved by the claimed invention.
Regarding Step 2A Prong 1, Applicant states, “the claims here, at Step 2A Prong 1, are not "directed to" the rejection's asserted new grounds of "Managing Personal Behavior or Relationships or Interactions Between People" as, viewing MPEP 2106.04(a)(2)(III)(C), the claims here are not like the claims in the examples given for such sub-grouping by the MPEP, and so, the rejection's assertions at Step 2A Prong 1 fail again to heed the "certain" qualifier of the "certain methods of organizing human activity" as described above by now improperly expanding the meaning of "Managing Personal Behavior or Relationships or Interactions Between People"… For example, even if sometimes providing information could be equated to "passing a note to a person who is in the middle of a meeting or conversation" like with the USPTO's characterization of Interval Licensing at MPEP 2106.04(a)(2)(III)(C), such reasoning does not apply here because such note could not be passed without the involved computer technology, alleged "additional elements" by this rejection (be those elements generic or not, arguendo) as, for example, since the content of such "note" in this case would be information that the note-passer could not have obtained without those "additional elements" as relating to endoscopy, which, for clarity, regards inside of the human body and to otherwise invisible aspects of the human body.” (Remarks, pages 17-18). Examiner respectfully disagrees. The 35 U.S.C. 101 patent eligibility of the pending claims is not self-evident according to the criteria of MPEP 2106.06(a). The Application relates to the field of medical quality control and to providing medical assistance to guide the operation of endoscopy by performing quality control on operation behaviors, which is part of the abstract idea.
Regarding Step 2B, Applicant asserts, “it cannot be said that the claim features represent the genericness asserted throughout this rejection. For example, although 35 USC 103 considerations may be irrelevant to whether a claim feature is directed to an abstract idea at Step 2A Prong 1, such 35 USC 103 considerations are relevant here.” (Remarks, page 20). The Application receives data associated with an operation behavior of a medical testing device and performs quality control on operation behaviors; this is not a technical improvement. As provided in the above analysis, the additional elements, as written, do not result in a practical application or significantly more than the abstract idea itself, as they are recited at an “apply it” level and merely generally link the abstract idea to a particular field of use. For the reasons stated above, claims 14 and 20 similarly fail to overcome the 35 U.S.C. 101 rejection. Here, there is no improvement to the medical testing device, the endoscope, or any of the other devices.
Rejections under 35 U.S.C. 103:
Applicant’s amendments overcome the previous rejection. Applicant’s arguments are well taken. Examiner withdraws the previous rejection in light of the amendments. Applicant’s arguments with respect to claim 1 have been considered, and the Examiner’s rejection has been updated to address Applicant’s amendments to claims 1, 14, and 20.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAWN TRINAH HAYNES whose telephone number is (571) 270-5994. The examiner can normally be reached M-F, 7:30 AM to 5:15 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Dunham can be reached on (571)272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAWN T. HAYNES/
Art Unit 3686
/RACHELLE L REICHERT/Primary Examiner, Art Unit 3686