DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Applicant’s Remarks, pages 6-10, filed 10/13/25, with respect to the rejection(s) of claim(s) 1, 6, 9, 11-13, and 18 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Barral et al (US 2019/0110856).
Regarding 35 USC 101, Applicant has amended independent claims 1, 13, and 18 to include one or more processors for executing computer readable instructions stored in a non-transitory computer readable memory device and a display device, where the indicators are selectable by a user through a user interface to provide additional information to the user about a corresponding variation (Applicant’s Remarks, page 6). Examiner agrees with Applicant, and the 35 USC 101 rejection is withdrawn.
Regarding claims 1, 13, and 18, Examiner initially indicated that claims 2, 14, and 19 were objected to and would be allowable if rewritten in independent form. Upon further search and consideration, these claim limitations are rejected as taught by Barral et al. Barral et al teaches "The surgeon may select a bookmark to jump to the corresponding portion of the video and watch the bookmarked video segment. The markers are unlabeled by default, but if the surgeon hovers a cursor over a marker, the associated surgical step will be displayed. The surgeon also has the option of displaying a list of the bookmarks in the video and corresponding information, such as the name of the surgical step and the timestamp at which it begins within the video" (paragraph 0034), and "display(s) 111 may include one or more touch-sensitive displays (user interface) capable of receiving touch inputs and providing touch input signals to the computing device 107, which may be employed to select options on the touch-sensitive display or to perform gestures on the touch-sensitive display" (paragraph 0042). Barral et al further teaches that computing device 107 annotates the surgical video with annotations (indicators) to identify each of the segments; the surgical video may then be output to display 111 with the annotations (indicators), which may allow the viewer of the surgical video to quickly identify and view (selectable) the relevant portions (corresponding variation) of the video (paragraph 0043). In other words, the surgical video is presented to a user on a touch-sensitive display to allow the user to look at or interact with the annotations on the video.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 7-13, 15, 16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Venkataraman et al (US 2019/0362834) in view of Kim et al (US 2020/0310349), further in view of Barral et al (US 2019/0110856).
Regarding claim 1, Venkataraman et al teaches a system for visualizing variations in performance of a surgical procedure, the system comprising one or more processors for executing computer readable instructions stored in a non-transitory computer readable memory device (one or more processors embedded therein or coupled thereto, or any other sort of computing device. Such a computer system includes various types of computer readable media and interfaces for various other types of computer readable media (paragraphs 0203 and 0212)), the computer readable instructions controlling the one or more processors upon execution by the one or more processors to perform operations comprising:
receiving a plurality of surgical videos (process 100 for establishing machine learning targets in preparation for mining surgical data from surgical videos of a given surgical procedure (paragraph 0033)), wherein each of the plurality of surgical videos captures a workflow of a same type of surgical procedure that is segmented into a segmented workflow comprising segments (clinical needs includes segmenting a surgical video of the surgical procedure into a set of phases (workflow). In some embodiments, to meet the clinical need of segmenting a surgical video into a set of phases, the process first defines a set of phases for the surgical procedure. Each phase (workflow) in the set of predefined phases represents a particular stage of the surgical procedure that serves a unique and distinguishable purpose in the entire surgical procedure (paragraph 0034));
analyzing the plurality of segmented workflows to determine a standard workflow in the plurality of surgical videos (the set of predefined phases can be used to partition the intraoperative surgical video, which can be a rather long video, into a set of shorter video segments, and each video segment corresponds to a particular stage of the surgical procedure which is distinguishable from other video segments corresponding to other stages of the surgical procedure (paragraph 0034));
Venkataraman et al fails to teach aligning the plurality of the segmented workflows to the standard workflow to identify variations from the standard workflow in the plurality of surgical videos.
Kim et al (US 2020/0310349) teaches aligning the plurality of the segmented workflows to the standard workflow to identify variations from the standard workflow in the plurality of surgical videos (digitally calculating and generating a reference hologram from the obtained object hologram, extracting an object phase of the object hologram and an object phase of the reference hologram, and calculating a difference between the object phases (paragraph 0027); obtaining an object hologram of a measurement target object; extracting each of a first phase (segmented workflow) information of the object hologram and a second phase information (standard workflow) of the calculated digital reference hologram; calculating a phase information difference (identify variations) from the first phase information of the object hologram and the second phase information of the calculated digital reference hologram (paragraph 0030)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al to include aligning the plurality of the segmented workflows to the standard workflow to identify variations from the standard workflow in the plurality of surgical videos.
The reason for doing so would be to clearly and accurately identify any errors in the video.
Venkataraman et al in view of Kim et al fails to teach outputting, to a display device, a visualization of the standard workflow and indicators of the variations.
Barral et al (US 2019/0110856) teaches outputting, to a display device, a visualization of the standard workflow and indicators of the variations (an ML technique may detect an interesting feature in a video frame. Interesting features may include a video frame recognized by an ML technique as indicating a new step of a surgical procedure, a new sub-step of a surgical procedure, or a recognized event within the surgical procedure (paragraph 0046). For example, if a bleeding event is detected (indicator of variation), these frames may be processed to potentially capture other events or sub-steps occurring during the event (standard workflow) (paragraph 0047). Detected events may also be indicated in real-time on the display 211 as additional information for the surgeon. For example, a bleeding event may be visually indicated, such as with a textual display or with a graphical overlay on the detected bleeding (paragraph 0048)),
wherein the indicators are selectable by a user through a user interface to provide additional information to the user about a corresponding variation (the surgeon may select a bookmark to jump to the corresponding portion of the video and watch the bookmarked video segment. The markers (indicators) are unlabeled by default, but if the surgeon hovers a cursor over a marker (selectable), the associated surgical step will be displayed. The surgeon also has the option of displaying a list of the bookmarks in the video and corresponding information, such as the name of the surgical step and the timestamp at which it begins within the video (corresponding variation) (paragraph 0034). Note: display(s) 111 may include one or more touch-sensitive displays (user interface) capable of receiving touch inputs and providing touch input signals to the computing device 107, which may be employed to select options on the touch-sensitive display or to perform gestures on the touch-sensitive display (paragraph 0042). Computing device 107 annotates the surgical video with annotations (indicators) to identify each of the segments. The surgical video may then be output to display 111 with the annotations (indicators). This may allow the viewer of the surgical video to quickly identify and view (selectable) the relevant portions (corresponding variation) of the video (paragraph 0043)). Note: the surgical video is presented to a user on a touch-sensitive display to allow the user to look at or interact with the annotations on the video.
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include outputting, to a display device, a visualization of the standard workflow and indicators of the variations, wherein the indicators are selectable by a user through a user interface to provide additional information to the user about a corresponding variation.
The reason for doing so would be to allow a user to clearly view video images.
Regarding claim 3, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the additional information includes a number of the plurality of surgical videos having the corresponding variation (Venkataraman et al: mining surgical data from surgical videos of a given surgical procedure (abstract and paragraph 0033)).
Regarding claim 4, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the additional information includes at least a portion of a segmented workflow of the plurality of segmented workflows having the corresponding variation (Venkataraman et al: surgical video analysis system additionally uses the established phases to break down surgical videos of the given surgical procedure into shorter video segments and uses the identified machine learning targets to label/tag these video segments into different categories of descriptors including surgical phases, surgical sub-phases or tasks, surgical tools, anatomies, complications, and tips and tricks (paragraph 0032)).
Regarding claim 5, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the visualization is a compressed alignment visualization (Venkataraman et al: universal technique for breaking down surgical case videos (also referred to as “surgical videos,” “surgical procedure videos,” or “procedure videos” hereinafter) of any given surgical procedure into a set of manageable machine learning targets and subsequently establishing associative relationships among these machine learning targets to identify machine learning classifiers (paragraph 0030)). Note: the surgical videos are compressed and categorized into a set of machine learning classifiers for clinical needs and surgical procedures.
Regarding claim 7, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein a type of a variation is based on a location of an indicator in the visualization relative to the standard workflow (Barral et al: “begin” and “end” tags include timestamps within the video and may be used to identify specific frames in the video 471 associated with the bookmark. The “begin” tag 626 indicates the location of the bookmark 622a within the video 471 and the location to display a visual indicator of the bookmark 422a on a video timeline 412 (workflow) (paragraph 0068)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include wherein a type of a variation is based on a location of an indicator in the visualization relative to the standard workflow.
The reason for doing so would be to allow a user to clearly view video images.
Regarding claim 8, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the type is one of an additional segment, a different segment, or an eliminated segment relative to the standard workflow (Barral et al: the video segments may also be indexed for searching based on their source video, bookmark information associated with each (e.g., steps, sub-steps, events, etc.), and the surgeon or surgeons that performed the surgical procedure (paragraph 0036)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include wherein the type is one of an additional segment, a different segment, or an eliminated segment relative to the standard workflow.
The reason for doing so would be to allow a user to select desired videos.
Regarding claim 9, Venkataraman et al in view of Kim et al fails to teach wherein only a subset of one or more of the segments, the variations, or the plurality of surgical videos are included in the visualization output to the display device.
Barral et al teaches wherein only a subset of one or more of the segments, the variations, or the plurality of surgical videos are included in the visualization output to the display device (these frames may be processed to potentially capture other events or sub-steps occurring during the event (paragraph 0047)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include wherein only a subset of one or more of the segments, the variations, or the plurality of surgical videos are included in the visualization output to the display device.
The reason for doing so would be to allow a user to clearly view video images.
Regarding claim 10, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the subset is selected by a user (Barral et al: The surgeon may select a bookmark to jump to the corresponding portion of the video and watch the bookmarked video segment. The markers (indicators) are unlabeled by default, but if the surgeon hovers a cursor over a marker (subset), the associated surgical step will be displayed. The surgeon also has the option of displaying a list of the bookmarks in the video and corresponding information, such as the name of the surgical step and the timestamp at which it begins within the video (corresponding variation) (paragraph 0034)).
Regarding claim 11, Venkataraman et al teaches wherein the operations further comprise segmenting each of the plurality of surgical videos into the segmented workflows (clinical needs includes segmenting a surgical video of the surgical procedure into a set of phases (workflow). In some embodiments, to meet the clinical need of segmenting a surgical video into a set of phases, the process first defines a set of phases for the surgical procedure. Each phase (workflow) in the set of predefined phases represents a particular stage of the surgical procedure that serves a unique and distinguishable purpose in the entire surgical procedure (paragraph 0034)).
Regarding claim 12, Venkataraman et al teaches wherein each of the segments is associated with a surgical phase (clinical needs includes segmenting a surgical video of the surgical procedure into a set of phases (workflow). In some embodiments, to meet the clinical need of segmenting a surgical video into a set of phases, the process first defines a set of phases for the surgical procedure. Each phase (workflow) in the set of predefined phases represents a particular stage of the surgical procedure that serves a unique and distinguishable purpose in the entire surgical procedure (paragraph 0034)).
Regarding claim 13, Venkataraman et al teaches a computer-implemented method for visualizing variations in performance of a surgical procedure, the method comprising:
receiving, by a system comprising one or more processors executing computer readable instructions stored in a non-transitory computer readable memory device, a plurality of surgical videos (process 100 for establishing machine learning targets in preparation for mining surgical data from surgical videos of a given surgical procedure (paragraph 0033); such a computer system includes various types of computer readable media and interfaces for various other types of computer readable media (paragraphs 0203 and 0212)), wherein each of the plurality of surgical videos captures a workflow of a same type of a surgical procedure that is segmented into a segmented workflow comprising segments (clinical needs includes segmenting a surgical video of the surgical procedure into a set of phases (workflow). In some embodiments, to meet the clinical need of segmenting a surgical video into a set of phases, the process first defines a set of phases for the surgical procedure. Each phase (workflow) in the set of predefined phases represents a particular stage of the surgical procedure that serves a unique and distinguishable purpose in the entire surgical procedure (paragraph 0034));
analyzing the plurality of segmented workflows to determine a standard workflow in the plurality of surgical videos (the set of predefined phases can be used to partition the intraoperative surgical video, which can be a rather long video, into a set of shorter video segments, and each video segment corresponds to a particular stage of the surgical procedure which is distinguishable from other video segments corresponding to other stages of the surgical procedure (paragraph 0034)); and
Venkataraman et al fails to teach determining variations in the segmented workflows in the plurality of surgical videos, the determining comprising:
aligning the plurality of the segmented workflows to the standard workflow to identify the variations from the standard workflow in the plurality of surgical videos.
Kim et al teaches determining, by the one or more processors executing computer readable instructions, variations in the segmented workflows in the plurality of surgical videos (calculating a phase information difference (identify variations) from the first phase information of the object hologram and the second phase information of the calculated digital reference hologram (paragraph 0030)), the determining comprising:
aligning the plurality of the segmented workflows to the standard workflow to identify the variations from the standard workflow in the plurality of surgical videos (obtaining an object hologram of a measurement target object; extracting each of a first phase (segmented workflow) information of the object hologram and a second phase information (standard workflow) of the calculated digital reference hologram; calculating a phase information difference (identify variations) from the first phase information of the object hologram and the second phase information of the calculated digital reference hologram (paragraph 0030)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al to include determining variations in the segmented workflows in the plurality of surgical videos, the determining comprising: aligning the plurality of the segmented workflows to the standard workflow to identify the variations from the standard workflow in the plurality of surgical videos.
The reason for doing so would be to clearly and accurately identify any errors in the video.
Venkataraman et al in view of Kim et al fails to teach outputting, to a display device, a visualization of the variations;
receiving user input via a user interface of the visualization; and
in response to the user input, outputting, to the display device, a second visualization that includes additional information describing the variations.
Barral et al teaches outputting, to a display device, a visualization of the variations (an ML technique may detect an interesting feature in a video frame. Interesting features may include a video frame recognized by an ML technique as indicating a new step of a surgical procedure, a new sub-step of a surgical procedure, or a recognized event within the surgical procedure (paragraph 0046). For example, if a bleeding event is detected (indicator of variation), these frames may be processed to potentially capture other events or sub-steps occurring during the event (standard workflow) (paragraph 0047). Detected events may also be indicated in real-time on the display 211 as additional information for the surgeon. For example, a bleeding event may be visually indicated, such as with a textual display or with a graphical overlay on the detected bleeding (paragraph 0048));
receiving user input via a user interface of the visualization (When the surgeon first accesses (user input) the newly bookmarked video, she is presented with an interface that devotes part of the screen to the video itself (visualization) (paragraph 0034)); and
in response to the user input, outputting to the display device, a second visualization that includes additional information describing the variations (the surgeon may select a bookmark to jump to the corresponding portion of the video and watch the bookmarked video segment. The markers (indicators) are unlabeled by default, but if the surgeon hovers a cursor over a marker (selectable), the associated surgical step will be displayed. The surgeon also has the option of displaying a list of the bookmarks in the video and corresponding information, such as the name of the surgical step and the timestamp at which it begins within the video (second visualization) (paragraph 0034)). Note: the surgical video is presented to a user on a touch-sensitive display to allow the user to look at or interact with the annotations on the video.
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include outputting, to a display device, a visualization of the variations; receiving user input via a user interface of the visualization; and, in response to the user input, outputting to the display device, a second visualization that includes additional information describing the variations.
The reason for doing so would be to allow a user to clearly view video images.
Regarding claim 15, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the additional information includes a portion of a segmented workflow having at least one of the variations (Venkataraman et al: surgical video analysis system additionally uses the established phases to break down surgical videos of the given surgical procedure into shorter video segments and uses the identified machine learning targets to label/tag these video segments into different categories of descriptors including surgical phases, surgical sub-phases or tasks, surgical tools, anatomies, complications, and tips and tricks (paragraph 0032)).
Regarding claim 16, Venkataraman et al in view of Kim et al further in view of Barral et al teaches comprising applying a filter to the visualization, the filter causing one or more of the visualization to include only a subset of the segments, the visualization to include only a subset of the variations, or the visualization to include only a subset of the plurality of surgical videos (Venkataraman et al: surgical video analysis system additionally uses the established phases to break down surgical videos of the given surgical procedure into shorter video segments and uses the identified machine learning targets to label/tag these video segments into different categories of descriptors including surgical phases, surgical sub-phases or tasks, surgical tools, anatomies, complications, and tips and tricks (paragraph 0032)).
Regarding claim 18, Venkataraman et al teaches a computer program product comprising a non-transitory computer readable memory device having computer-executable instructions stored thereon, which when executed by one or more processors cause the one or more processors to perform operations (one or more processors embedded therein or coupled thereto, or any other sort of computing device. Such a computer system includes various types of computer readable media and interfaces for various other types of computer readable media (paragraphs 0203 and 0212)) comprising:
the visualizing comprising:
receiving a standard workflow of a plurality of surgical videos of a same type of surgical procedure (process 100 for establishing machine learning targets in preparation for mining surgical data from surgical videos of a given surgical procedure (paragraph 0033));
receiving workflows of the plurality of surgical videos (clinical needs includes segmenting a surgical video of the surgical procedure into a set of phases (workflow). In some embodiments, to meet the clinical need of segmenting a surgical video into a set of phases, the process first defines a set of phases for the surgical procedure. Each phase (workflow) in the set of predefined phases represents a particular stage of the surgical procedure that serves a unique and distinguishable purpose in the entire surgical procedure (paragraph 0034)); and
Venkataraman et al fails to teach visualizing variations in performance of a surgical procedure.
Kim et al teaches visualizing variations in performance of a surgical procedure (digitally calculating and generating a reference hologram from the obtained object hologram, extracting an object phase of the object hologram and an object phase of the reference hologram, and calculating a difference between the object phases (paragraph 0027); obtaining an object hologram of a measurement target object; extracting each of a first phase (segmented workflow) information of the object hologram and a second phase information (standard workflow) of the calculated digital reference hologram; calculating a phase information difference (identify variations) from the first phase information of the object hologram and the second phase information of the calculated digital reference hologram (paragraph 0030)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al to include visualizing variations in performance of a surgical procedure.
The reason for doing so would be to clearly and accurately identify any errors in the video.
Venkataraman et al in view of Kim et al fails to teach outputting, to a display device, a visualization of variations between the standard workflow and one or more workflows of the plurality of surgical videos;
receiving user input via a user interface of the visualization;
in response to the user input, outputting, to the display device, a second visualization of one or more of the variations between the standard workflow and one or more workflows of the plurality of surgical videos.
Barral et al teaches outputting, to a display device, a visualization of variations between the standard workflow and one or more workflows of the plurality of surgical videos (an ML technique may detect an interesting feature in a video frame. Interesting features may include a video frame recognized by an ML technique as indicating a new step of a surgical procedure, a new sub-step of a surgical procedure, or a recognized event within the surgical procedure (paragraph 0046). For example, if a bleeding event is detected (indicator of variation), these frames may be processed to potentially capture other events or sub-steps occurring during the event (standard workflow) (paragraph 0047). Detected events may also be indicated in real-time on the display 211 as additional information for the surgeon. For example, a bleeding event may be visually indicated, such as with a textual display or with a graphical overlay on the detected bleeding (paragraph 0048));
receiving user input via a user interface of the visualization (When the surgeon first accesses (user input) the newly bookmarked video, she is presented with an interface that devotes part of the screen to the video itself (visualization) (paragraph 0034));
in response to the user input, outputting, to the display device, a second visualization of one or more of the variations between the standard workflow and one or more workflows of the plurality of surgical videos (the surgeon may select a bookmark to jump to the corresponding portion of the video and watch the bookmarked video segment. The markers (indicators) are unlabeled by default, but if the surgeon hovers a cursor over a marker (selectable), the associated surgical step will be displayed. The surgeon also has the option of displaying a list of the bookmarks in the video and corresponding information, such as the name of the surgical step and the timestamp at which it begins within the video (second visualization) (paragraph 0034). The surgeon can select (in response to the user input) one or more segments of video by selecting the corresponding bookmarks and selecting an option to extract the segment(s) of video (variations between the standard workflow and one or more workflows of the plurality of surgical videos) (paragraph 0036)). Note: the surgical videos are presented to a user on a touch-sensitive display to allow the user to look at or interact with the annotations on the video. The videos are time stamped, thereby having different time stamps.
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include outputting, to a display device, a visualization of variations between the standard workflow and one or more workflows of the plurality of surgical videos; receiving user input via a user interface of the visualization; and, in response to the user input, outputting, to the display device, a second visualization of one or more of the variations between the standard workflow and one or more workflows of the plurality of surgical videos.
The reason for doing so would be to clearly and accurately identify any errors in the video.
Regarding claim 20, Venkataraman et al in view of Kim et al further in view of Barral et al teaches wherein the second visualization depicts a subset of one or more of the workflows, the variations, or the plurality of surgical videos (Barral et al: the system 100 employs standardized surgery types and subtypes, steps, sub-steps, and events (paragraph 0066)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al to include wherein the second visualization depicts a subset of one or more of the workflows, the variations, or the plurality of surgical videos.
The reason for doing so would be to allow a user to clearly view video images.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Venkataraman et al (US 2019/0362834) in view of Kim et al (US 2020/0310349), further in view of Barral et al (US 2019/0110856), and further in view of Aaltonen (US 2006/0230056).
Regarding claim 6, Venkataraman et al in view of Kim et al further in view of Barral et al teaches all of the limitations of claim 1.
Venkataraman et al in view of Kim et al further in view of Barral et al fails to teach wherein the visualization is a route visualization.
Aaltonen teaches wherein the visualization is a route visualization (route visualization phase 510 can be made dependent on and be performed in connection with or after specification phase 512, where, on the basis of the user-defined route, the target elements for metadata operation are specified (paragraph 0040)).
Therefore, it would have been obvious to one of ordinary skill in the art to modify Venkataraman et al in view of Kim et al further in view of Barral et al to include wherein the visualization is a route visualization.
The reason for doing so would be to allow a user to clearly view video images.
Allowable Subject Matter
Claim 17 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication should be directed to Michael Burleson, whose telephone number is (571) 272-7460 and fax number is (571) 273-7460. The examiner can normally be reached Monday through Friday from 8:00 a.m. to 4:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong, can be reached at (571) 270-3438.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Michael Burleson
Patent Examiner
Art Unit 2681
January 23, 2026
/MICHAEL BURLESON/
/AKWASI M SARPONG/SPE, Art Unit 2681 1/26/2026