DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.
Status of Claims
In the amendment filed 12/17/2025, the following occurred: Claims 1, 8, and 15 were amended; Claim 3 was canceled; and Claim 22 was added as new. Claims 1-2 and 4-22 are presented for examination.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 15-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 15-22 are drawn to methods, which are within a statutory category of invention (Step 1: YES).
Independent claim 15 recites detecting, at a healthcare location, a beginning of an imaging procedure; generating unique identifiers for images in an image stream produced during the procedure; analyzing the images using the unique identifiers to determine an order of the images within the image stream; detecting, at the healthcare location, an end of the imaging procedure based on an ending event; and in response to detecting the end of the imaging procedure, incrementing a procedure count associated with the healthcare location based on characteristics of the imaging procedure.
The respective dependent claims 16-22, but for the inclusion of the additional elements specifically addressed below, provide recitations further limiting the invention of the independent claim(s).
The recited limitations, as drafted, under their broadest reasonable interpretation, cover certain methods of organizing human activity, as reflected in the specification, which states that the invention is directed to “real time analysis of medical images” (see: specification paragraph 2). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. The present claims cover certain methods of organizing human activity because, though “[r]eal-time medical image processing (e.g., computer annotation of images as an imaging procedure is ongoing) is often very helpful to providers”, a problem exists where “real-time image processing is often underutilized or impossible due to expensive equipment and lag when using, for example, remote computing resources to conduct image processing” (see: specification paragraph 3). Further, the claims represent the management of interactions between a patient and a provider/physician/customer/user because they address annotating images in situations where, “in various imaging modalities, a provider interprets images as the images are collected to make decisions relating to disease detection, diagnosis, therapy, and/or navigation. Accordingly, real time image processing and analysis may assist physicians in making such decisions” (see: specification paragraph 16). Accordingly, the claims recite an abstract idea(s) (Step 2A Prong One: YES).
This judicial exception is not integrated into a practical application. The claims are abstract but for the inclusion of the additional elements including “based on a button activation at the image processing system…by the image processing system…at a cloud location…” (claim 15), “at an imaging device used…” (claim 16), “local compute resources collocated with an imaging device used…” (claim 17), “an imaging device used in the imaging procedure is inside of a patient's body” (claim 19), and “at one of the healthcare location or at a cloud location…” (claim 21), which are additional elements that are recited at a high level of generality (e.g., the “button activation at the image processing system” is configured through no more than a statement that a function is begun “based on” said activation; the “image processing system” is configured through no more than a statement that functions are performed “by” said system; the “cloud location” computing environment is configured through no more than a statement that analysis is performed “at” said cloud location; and similarly, the “local compute resources” and “imaging device” are configured through no more than statements that they are being “used” or are “collocated”), such that they amount to no more than mere instructions to apply the exception using generic computer elements. See: MPEP 2106.05(f).
The combination of these additional elements is no more than mere instructions to apply the exception using generic computer elements. Accordingly, even in combination, these additional elements do not integrate the abstract idea(s) into a practical application because they do not impose any meaningful limits on practicing the abstract idea(s). Accordingly, the claims are directed to an abstract idea(s) (Step 2A Prong Two: NO).
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea(s) into a practical application, using the additional elements to perform the abstract idea(s) amounts to no more than mere instructions to apply the exception using generic elements. Mere instructions to apply an exception using generic elements cannot provide an inventive concept. See MPEP 2106.05(f).
Further, the claimed additional elements, identified above, are not sufficient to amount to significantly more than the judicial exception because they are generic elements that are configured to perform well-understood, routine, and conventional activities previously known to the industry. See: MPEP 2106.05(d). Said additional elements are recited at a high level of generality and provide conventional functions that do not add meaningful limits to practicing the abstract idea(s). The originally filed specification supports this conclusion at:
Paragraph 20, where “The imaging device 102 may be a medical imaging device configured to collect images of patients (either interior or exterior images), which may be used for diagnostic purposes. The imaging device 102 may, in some examples, be a device for collecting images which may be streamed to the image processing service 112, ultrasound images, endoscopy images, laparoscopic images, fluoroscopy images, and the like. In various examples, the imaging device 102 may be a device for collecting images which may be stored and/or recorded before being sent to the image processing service 112. For example, the imaging device 102 may be an magnetic resonance imaging (MRI) machine, computed tomography (CT) machine, X-Ray machine, and the like...”
Paragraph 21, where “The local computer 104 may be a desktop computer, laptop computer, local server, or the like. Generally, the local computer 104 is connected to the network 108, the imaging device 102, and the display 106. For example, the local computer 104 may receive image data (e.g., image streams) from the imaging device 102, anonymize, compress, and/or encrypt the image data, send the image data to the image processing service 112 over the network 108, receive processed image data from the image processing service 112 over the network 108, and present the processed image data to a user (e.g., a physician) via the display 106. The display 106 may, in some examples, be a part of the local computer 104. In some examples, the display 106 may be part of, or connected to, the imaging device 102. In various examples, the display 106 may be a liquid crystal display (LCD), light emitting diode (LED) display, touchscreen, and the like.”
Paragraph 23, where “The computing environment 110 may be a cloud computing environment, including computing resources in several physical locations. Accordingly, the image processing service 112 may, in various examples, be composed of various microservices distributed across various compute resources. In some examples, the computing environment 110 may include computing resources in multiple cloud environments, such as proprietary cloud environments (e.g., dedicated servers), shared or public cloud environments, etc.”
Paragraph 24, where “In various implementations, the computing environment 110 may include or utilize one or more hosts or combinations of compute resources, which may be located, for example, at one or more servers, cloud computing platforms, computing clusters, and the like. Generally, the image processing service 112 is implemented by the computing environment 110, which includes compute resources including hardware for memory and one or more processors. For example, the computing environment 110 may utilize or include one or more processors, such as a CPU, GPU, and/or programmable or configurable logic.”
Paragraph 25, where “In some embodiments, various components of the computing environment 110 may be distributed across various computing resources, such that the components of the computing environment 110 communicate with one another through the network 108 or using other communications protocols. For example, in some embodiments, the computing environment 110 may be implemented as a serverless service, where computing resources for various components of the computing environment 110 may be located across various computing environments (e.g., cloud platforms) and may be reallocated dynamically and automatically according to resource usage of the image processing service 112. In various implementations, the image processing service 112 may be implemented using organizational processing constructs such as functions implemented by worker elements allocated with compute resources, containers, virtual machines, and the like.”
Paragraph 31, where “In some examples, compression may be performed using an autoencoder network. For example, an autoencoder may utilize an encoder and decoder, where the encoder compresses input data to reduce its dimensionality and the decoder attempts to reconstruct the original data from the compressed data. The autoencoder may use residual blocks, where each of the residual blocks includes a convolutional layer, a batch normalization layer, and an activation layer followed by another convolutional layer and batch normalization layer. For each residual block, the output of the block is concatenated with the input and passed onto the next layer. In some examples, the autoencoder network includes an encoder with three downscaling units, where each downscaling unit is followed by three residual blocks and a decoder with three upscaling units, where each upscaling unit is followed by three residual blocks. A downscaling unit generally includes a convolutional layer, a batch normalization layer, and a parametric rectified linear unit (ReLU) activation unit. An upscaling unit generally includes a deconvolutional layer, a batch normalization layer, and a parametric ReLU activation unit.”
Paragraph 33, where “In other examples, an autoencoder using encoding layers may perform image compression, and the image processing service 112 may analyze the compressed data directly. For example, compression may be performed by an encoder network at local compute resources 104, and the resulting compressed latent object may be transmitted to the image processing service 112. The image processing service 112 may include components (e.g., detectors) trained to perform analysis directly on the compressed data (e.g., without decompression by a decoder sub-network). The results of such analysis (e.g., computer annotation of the image) may be streamed back to the local compute resources 104.”
Paragraph 45, where “The image processing service 112 may include microservices that analyze compressed image data. Such microservices may include, in various examples, machine learning models trained to detect specific structures, characteristics of images, or the like. In some examples, such models may be trained using lossy data, such that the models can ultimately analyze compressed images. Such lossy compression may retain enough information within images to perform meaningful analysis of the images. Conventionally, image analysis is done on uncompressed images, as image analysis uses as much information as possibly for accuracy. Where the image processing service 112 analyzes compressed image data, the images are compressed and the models are trained such that the compressed images contain enough data for accurate analysis by the image processing service 112. For example, the models may be trained using compressed image data so that the model may be used to analyze compressed image data. Compressed images may have less visual information, such that it may be harder to analyze compressed images. Where a model is trained on uncompressed images and analyze compressed images, performance may be worse (e.g., fewer features may be correctly identified) than a model trained on compressed images to analyze compressed images.”
Paragraph 46, where “FIG. 4 is a schematic diagram of a computing system 200 which may be used to implement various embodiments in the examples described herein. For example, local compute resources 104 may be located at one or several computing systems 200. In various embodiments, computing environment 110 is also implemented by a computing system 200. This disclosure contemplates any suitable number of computing systems 200. For example, a computing system 200 may be a server, a desktop computing system, a mainframe, a mesh of computing systems, a laptop or notebook computing system, a tablet computing system, or a combination of two or more of these. Where appropriate, the computing system 200 may include one or more computing systems; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks.”
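For illustration only, the autoencoder structure described in specification paragraph 31, quoted above, might be sketched as follows. This is a minimal, non-authoritative sketch assuming PyTorch; the module names, channel counts, kernel sizes, and strides are illustrative assumptions and are not part of the record.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Conv -> batch norm -> activation -> conv -> batch norm; per the quoted
        # description, the block input is concatenated with the block output
        # before being passed to the next layer (so the channel count doubles).
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.PReLU(),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            return torch.cat([x, self.body(x)], dim=1)

    class DownscalingUnit(nn.Module):
        # Convolutional layer (stride 2 assumed) -> batch norm -> parametric ReLU.
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.PReLU(),
            )

        def forward(self, x):
            return self.body(x)

    class UpscalingUnit(nn.Module):
        # Deconvolutional layer (stride 2 assumed) -> batch norm -> parametric ReLU.
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.PReLU(),
            )

        def forward(self, x):
            return self.body(x)

    # Example shapes for one downscaling unit followed by one residual block.
    x = torch.randn(1, 1, 64, 64)     # a single-channel input image
    z = DownscalingUnit(1, 16)(x)     # -> (1, 16, 32, 32)
    z = ResidualBlock(16)(z)          # -> (1, 32, 32, 32) after concatenation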
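Similarly, for illustration only, the compressed-domain analysis described in specification paragraph 33, quoted above, in which detectors analyze the compressed latent object directly without a decoder sub-network, might be sketched as follows. The layer choices, channel counts, and the two-class detector head are illustrative assumptions rather than features of the record.

    import torch
    import torch.nn as nn

    # Encoder run at the local compute resources: compresses an image into a
    # lower-dimensional latent object that is transmitted in place of the image.
    encoder = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.PReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.PReLU(),
    )

    # Detector run at the image processing service: operates directly on the
    # compressed latent, with no decoder sub-network in the analysis path.
    detector = nn.Sequential(
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 2),   # e.g., "characteristic of interest present" vs. "absent"
    )

    image = torch.randn(1, 1, 128, 128)   # a single-channel input image
    latent = encoder(image)               # compressed latent sent to the service
    logits = detector(latent)             # analysis performed on the compressed data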
Further, the concepts of receiving or transmitting data over a network, such as using the Internet to gather data, and storing and retrieving information in memory have been identified by the courts as well-understood, routine, and conventional activities. See: MPEP 2106.05(d)(II).
Viewing the limitations as an ordered combination, the claims simply instruct the additional elements to implement the concept described above in the identification of abstract idea(s) with routine, conventional activity specified at a high level of generality in a particular technological environment. Hence, the claims as a whole, considering the additional elements individually and as an ordered combination, do not amount to significantly more than the abstract idea(s) (Step 2B: NO).
Dependent claim(s) 16-22, when analyzed as a whole, considering the additional elements individually and/or as an ordered combination, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitation(s) fail(s) to establish that the claim(s) is/are not directed to an abstract idea(s) without significantly more. These claims fail to remedy the deficiencies of their parent claims above, and are therefore rejected for at least the same rationale as applied to their parent claims above, and incorporated herein.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2 and 4-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2018/122506 to Grantcharov in view of U.S. Patent Application Publication 2020/0411173 to Mansi further in view of WO 2015/127432 A1 to Guo.
As per claim 1, Grantcharov teaches a method for an image processing system comprising:
receiving, from an imaging device of an imaging system at a healthcare location, encrypted compressed image data (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 141-143, and 190, is met by lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI);
unencrypting the compressed image data of a biological structural feature of a patient (see: Grantcharov, paragraph 129, 142, 147, 151, 154, and 213, is met by decryption of the data streams, the embedding of encrypt/decrypt keys);
generating unique identifiers for compressed images of the compressed image data, wherein the unique identifiers identify an order of the compressed images within the compressed image data (see: Grantcharov, paragraph 93, 97, 159, 163, and 264, is met by the generation of and tagging with time-stamps to create a timeline or clock for the data streams);
identifying a characteristic of interest in the compressed image data (see: Grantcharov, paragraph 93, 97, 159, 163, 169, and 264, is met by automatic data analysis techniques, which may detect pre-determined "events" that can be tagged and/or time-stamped, where all tagged events may be recorded on a master timeline that represents the entire duration of the procedures);
streaming in real time, to the healthcare location, the computer annotations for outputting on a display, as the imaging session is in progress using the unique identifiers (see: Grantcharov, paragraph 68, 108, 155-156, 216, 221, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors); and
displaying, on the display, the computer annotations (see: Grantcharov, paragraph 68, 108, 155-156, 216, 221, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors).
Though Grantcharov teaches an intelligent dashboard interface for annotations (see: Grantcharov, paragraph 94), Grantcharov fails to specifically teach the following limitations met by Mansi as cited:
wherein the characteristic of interest is the biological structural feature of the patient in the compressed images corresponding to the compressed image data (see: Mansi, paragraph 100-103 and 150, is met by identifying to locate, isolate, measure, quantify, and segment a region of one or more images);
annotating by a processor the compressed images of the compressed image data reflecting the identified characteristic of interest to generate computer annotations (see: Mansi, Fig. 2, and paragraph 149-154, is met by annotating, including with measurements, one or more images in the data packet, where the annotations are determined automatically);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the annotating as taught by Grantcharov to include identifying to locate, isolate, measure, quantify, and segment a region of one or more images, and automatically annotating, including with measurements, one or more images in the data packet, as taught by Mansi, with the motivation of drawing attention to one or more features of the images, helping a specialist or other recipient easily and efficiently assess the images, and/or performing any other suitable functions (see: Mansi, paragraph 149).
Grantcharov fails to specifically teach displaying the annotations over a local image of the biological structural feature of the patient captured by the imaging system, wherein the local image is separate from the compressed image data; however, Guo teaches overlaying an annotation relative to the object segmented from the background image of the second image, where the second image is a real-time, intra-operative image, and in a cloud computing environment (see: Guo, paragraph 14, 39, 44, and 82).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the real time visual display feeds as taught by Grantcharov and Mansi to include overlaying an annotation relative to the object segmented from the background image of the second image, where the second image is a real-time, intra-operative image, and in a cloud computing environment, as taught by Guo, with the motivation of allowing the surgeon to visualize the area of interest in real-time, which can improve surgical resections (see: Guo, paragraph 24).
As per claim 2, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 1, and further teach:
wherein the compressed image data is lossy image data (see: Mansi, paragraph 89-98, is met by lossy compression of images as well as other preprocessing of images).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the annotating as taught by Grantcharov, Mansi, and Guo to include lossy compression of images as taught by Mansi with the motivation of enabling and/or shortening the time of transfer of data between points of the system (see: Mansi, paragraph 89).
As per claim 3, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 1, and further teach:
displaying, at the healthcare location, the computer annotations and image data collected during the imaging session at the healthcare location (see: Grantcharov, paragraph 68, 108, 155-156, 216, 221, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors).
As per claim 4, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 1, and further teach:
wherein the characteristic of interest is identified using a machine learning model trained using compressed image data (see: Mansi, paragraph 62, 89, and 103-105, is met by performance using any number of machine learning (e.g., deep learning) and any suitable form of machine learning algorithm, where preprocessed lossy compression images enable fast training, and identifying and segmenting an affected region within one or more of the set of instances by a convolutional neural network (CNN) or any other suitable algorithm).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automatic data analysis detection of events that can be tagged as taught by Grantcharov, Mansi, and Guo to include identifying an affected region within one or more of the set of instances using machine learning trained on lossy compression images as taught by Mansi with the motivation of drawing attention to one or more features of the images, helping a specialist or other recipient easily and efficiently assess the images, and/or performing any other suitable functions (see: Mansi, paragraph 149).
As per claim 5, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 1, and further teach:
wherein annotating the compressed image data comprises creating the computer annotations, the computer annotations specifying where in the image data the identified characteristic of interest is located (see: Mansi, Fig. 4 and 8, and paragraph 149-154, is met by annotating one or more images in the data packet, where the annotations are determined automatically, annotations including region or location such as one or more visual indicators of labels, text, arrows, highlighted or colored regions, and measurements).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the annotating as taught by Grantcharov, Mansi, and Guo to include automatically annotating one or more images in the data packet to include a region or location, such as one or more visual indicators of labels, text, arrows, highlighted or colored regions, and measurements, as taught by Mansi with the motivation of drawing attention to one or more features of the images, helping a specialist or other recipient easily and efficiently assess the images, and/or performing any other suitable functions (see: Mansi, paragraph 149).
As per claim 6, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 1, and further teach:
wherein the compressed, encrypted image data is compressed and encrypted by local compute resources collocated with the imaging device used to collect the image data during the imaging session (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 139, 141-143, 190, and 202, is met by encoder locally and directly connected to video capture devices, the encoder for lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI).
As per claim 7, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 6, and further teach:
wherein the compressed, encrypted image data is compressed using an autoencoder network located at least partially at the local compute resources (see: Mansi, paragraph 26, 63, and 65, is met by an encoder/decoder network model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the compression and secure encrypting as taught by Grantcharov, Mansi, and Guo to include an encoder/decoder network model as taught by Mansi with the motivation of drawing attention to one or more features of the images, helping a specialist or other recipient easily and efficiently assess the images, and/or performing any other suitable functions (see: Mansi, paragraph 149).
As per claim 8, Grantcharov teaches a system comprising:
local computing resources configured to compress image data received from an imaging device, the image data collected during an imaging procedure (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 139, 141-143, 190, and 202, is met by encoder locally and directly connected to video capture devices, the encoder for lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI); and
an image processing service hosted in a cloud computing environment configured to (see: Grantcharov, paragraph 106, 199, and 256-257, is met by distributed computing resources such as cloud computing):
receive, from the imaging device via the local computing resources, compressed image data of biological structural features of a patient (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 141-143, and 190, is met by lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI),
generate unique identifiers for compressed images of the compressed image data, wherein the unique identifiers identify an order of the compressed images within the compressed image data (see: Grantcharov, paragraph 93, 97, 159, 163, and 264, is met by the generation of and tagging with time-stamps to create a timeline or clock for the data streams);
transmit, in real time, to the local computing resources, the computer annotations for display as the imaging procedure is ongoing using the unique identifiers (see: Grantcharov, paragraph 68, 108, 155-156, 216, 221, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors); and
display, on a display, the computer annotations (see: Grantcharov, paragraph 68, 108, 155-156, 216, 221, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors).
Though Grantcharov teaches an intelligent dashboard interface for annotations (see: Grantcharov, paragraph 94), Grantcharov fails to specifically teach the following limitation met by Mansi as cited:
analyze the compressed image data to provide computer annotations to the compressed images of the compressed image data, wherein the computer annotations identify the biological structural features of the patient in the compressed images corresponding to the compressed image data (see: Mansi, Fig. 2, and paragraph 100-103 and 149-154, is met by identifying to locate, isolate, measure, quantify, and segment a region of one or more images, and annotating, including with measurements, one or more images in the data packet, where the annotations are determined automatically),
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the annotating as taught by Grantcharov to include identifying to locate, isolate, measure, quantify, and segment a region of one or more images, and automatically annotating, including with measurements, one or more images in the data packet, as taught by Mansi, with the motivation of drawing attention to one or more features of the images, helping a specialist or other recipient easily and efficiently assess the images, and/or performing any other suitable functions (see: Mansi, paragraph 149).
Grantcharov fails to specifically teach displaying the annotations over a local image of the biological structural features of the patient captured by the imaging device, wherein the local image is separate from the compressed image data; however, Guo teaches overlaying an annotation relative to the object segmented from the background image of the second image, where the second image is a real-time, intra-operative image, and in a cloud computing environment (see: Guo, paragraph 14, 39, 44, and 82).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the real time visual display feeds as taught by Grantcharov and Mansi to include overlaying an annotation relative to the object segmented from the background image of the second image, where the second image is a real-time, intra-operative image, and in a cloud computing environment, as taught by Guo, with the motivation of allowing the surgeon to visualize the area of interest in real-time, which can improve surgical resections (see: Guo, paragraph 24).
As per claim 9, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 8, and further teach:
wherein the local computing resources are further configured to split the image data received from the imaging device into a first stream and a second stream, wherein the compressed image data is created from the first stream of the image data (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 141-143, and 190, is met by lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI).
As per claim 10, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 9, and further teach:
wherein the local computing resources are further configured to display the second stream of image data and the computer annotations together as the imaging procedure is ongoing (see: Grantcharov, paragraph 68, 108, 137, 155-156, 216, 221, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors).
As per claim 11, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 10, and further teach:
wherein the local computing resources are further configured to display the computer annotations responsive to a determination that the computer annotations were received at the local computing resources before a threshold time after sending the compressed image data to the image processing service (see: Grantcharov, paragraph 68, 108, 155-156, 216, 221, 247, 256, 268, 282, 287, and 302-303, is met by identifying critical events during surgical procedures, real-time visual display of feeds, a real-time tool to assist surgeons and OR teams intraoperatively by reporting events that may lead to conditions of potential errors, and synchronization to ensure latency of less than one-thirteenth of a second).
As per claim 12, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 8, and further teach:
wherein the image processing service includes a machine learning model trained using compressed image data, wherein the compressed images are analyzed using the machine learning model (see: Mansi, paragraph 62, 89, and 103-105, is met by performance using any number of machine learning (e.g., deep learning) and any suitable form of machine learning algorithm, where preprocessed lossy compression images enable fast training, and identifying and segmenting an affected region within one or more of the set of instances by a convolutional neural network (CNN) or any other suitable algorithm).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the automatic data analysis detection of events that can be tagged as taught by Grantcharov, Mansi, and Guo to include identifying an affected region within one or more of the set of instances using machine learning trained on lossy compression images as taught by Mansi with the motivation of drawing attention to one or more features of the images, helping a specialist or other recipient easily and efficiently assess the images, and/or performing any other suitable functions (see: Mansi, paragraph 149).
As per claim 13, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 8, and further teach:
wherein the unique identifiers are first unique identifiers and the image processing service is further configured to generate second unique identifiers for the computer annotations based at least on the unique identifiers of the compressed images, wherein the first unique identifiers and the second unique identifiers are generated at the healthcare location or at a cloud location (see: Grantcharov, paragraph 93, 97, 159, 163, and 264, is met by the generation of and tagging with time-stamps to create a timeline or clock for the data streams).
As per claim 14, Grantcharov, Mansi, and Guo teach the invention as claimed, see discussion of claim 8, and further teach:
wherein the local computing resources are further configured to anonymize the image data, wherein anonymizing the image data comprises removing one or more pieces of identifying information from the image data (see: Grantcharov, paragraph 40, 49, 52, 91, 100, 104, 127-128, 147, and 154, is met by identity anonymization including to images).
Claim(s) 15-17 and 20-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2018/122506 to Grantcharov in view of U.S. Patent Application Publication 2023/0274821 to Oliveira.
As per claim 15, Grantcharov teaches a method for an image processing system comprising:
detecting, at a healthcare location, a beginning based on a button activation at the image processing system (see: Grantcharov, paragraph 93, 97, 137, 159, 163, 223, 253, and 264, is met by start point of recording session and the generation of and tagging with time-stamps to create a timeline or clock for the data streams, activatable by a button touch, and screen interface to control one or more components of the system such as start or stop recording) of an imaging procedure by the image processing system (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 141-143, and 190, is met by lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI);
generating unique identifiers for images in an image stream produced during the procedure (see: Grantcharov, paragraph 93, 97, 159, 163, and 264, is met by the generation of and tagging with time-stamps to create a timeline or clock for the data streams);
analyzing, at a cloud location (see: Grantcharov, paragraph 106, 199, and 256-257, is met by distributed computing resources such as cloud computing), the images using the unique identifiers to determine an order of the images within the image stream (see: Grantcharov, paragraph 93, 97, 159, 163, 169, and 264, is met by automatic data analysis techniques, which may detect pre-determined "events" that can be tagged and/or time-stamped, where all tagged events may be recorded on a master timeline that represents the entire duration of the procedures);
detecting, at the healthcare location, an end of the imaging procedure based on an ending event; and in response to detecting the end of the imaging procedure (see: Grantcharov, paragraph 93, 97, 137, 159, 163, 223, 253, and 264, is met by start point of recording session and the generation of and tagging with time-stamps to create a timeline or clock for the data streams, activatable by a button touch, and screen interface to control one or more components of the system such as start or stop recording),
Grantcharov teaches log files and tracking (see: Grantcharov, paragraph 37, 84, 88, 137, 219, 223, and 349-350), but Grantcharov fails to specifically teach the following limitations met by Oliveira as cited:
incrementing a procedure count associated with the healthcare location based on characteristics of the imaging procedure (see: Oliveira, paragraph 32-33, is met by using DICOM protocol to capture operational data including equipment used, a number of procedures performed, a location of the procedure, a type of procedure, and capture clinical data including ultrasound imaging examinations).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis as taught by Grantcharov to include using the DICOM protocol to capture operational data including equipment used, a number of procedures performed, a location of the procedure, and a type of procedure, and to capture clinical data including ultrasound imaging examinations, as taught by Oliveira, with the motivation of tracking wait times of scheduled patients (see: Oliveira, paragraph 32).
As per claim 16, Grantcharov and Oliveira teach the invention as claimed, see discussion of claim 15, and further teach:
wherein detecting the beginning of the imaging procedure comprises detecting selection of a procedure beginning marker (see: Grantcharov, paragraph 93, 97, 130, 137, 150, 159, 163, 223, and 264, is met by start point of recording session and the generation of and tagging with time-stamps to create a timeline or clock for the data streams) at an imaging device used in the imaging procedure (see: Grantcharov, paragraph 47-48, 56-57, 91, 101, 129-130, 141-143, and 190, is met by lossless compression and securely encrypting a transport file of compressed and encrypted "real-time data streams" including video of a procedural video capture device such as an endoscopic camera, x-ray, and MRI).
As per claim 17, Grantcharov and Oliveira teach the invention as claimed, see discussion of claim 15, and further teach:
identifying the healthcare location from a plurality of healthcare locations based on an identifier of local compute resources collocated with an imaging device used in the imaging procedure (see: Grantcharov, paragraph 47-48, 56-57, 91, 93, 97, 101, 129-130, 139, 141-143, 159, 163, 190, 202, and 264, is met by the generation of and tagging with time-stamps to create a timeline or clock for the data streams, the encoder locally and directly connected to video capture devices such as an endoscopic camera, x-ray, and MRI), wherein the identifier is associated with the healthcare location (see: Grantcharov, paragraph 234, is met by facility variables such as Department, Operating Room, and so on).
As per claim 20, Grantcharov and Oliveira teach the invention as claimed, see discussion of claim 15, and further teach:
wherein analyzing the image stream produced during the imaging procedure comprises analyzing a compressed image stream (see: Grantcharov, paragraph 93, 97, 159, 163, 169, and 264, is met by automatic data analysis techniques, which may detect pre-determined "events" that can be tagged and/or time-stamped, where all tagged events may be recorded on a master timeline that represents the entire duration of the procedures).
As per claim 21, Grantcharov and Oliveira teach the invention as claimed, see discussion of claim 15, and further teach:
wherein the unique identifiers are generated at one of the healthcare location or at a cloud location (see: Grantcharov, paragraph 93, 97, 106, 159, 163, 199, 256-257, and 264, is met by the distributed computing resources such as cloud computing, generation of and tagging with time-stamps to create a timeline or clock for the data streams).
As per claim 22, Grantcharov and Oliveira teach the invention as claimed, see discussion of claim 15, and further teach:
wherein the ending event comprises an ending button press, a removal of a probe, or a passage of a duration of time (see: Grantcharov, paragraph 93, 97, 137, 159, 163, 220, 223, 253-254, and 264, is met by the generation of and tagging with time-stamps to create a timeline or clock for the data streams, stop is activatable by a button touch, end of the recording, and screen interface to control one or more components of the system such as start or stop recording).
Claim(s) 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2018/122506 to Grantcharov in view of U.S. Patent Application Publication 2023/0274821 to Oliveira further in view of U.S. Patent Application Publication 2020/0411173 to Mansi.
As per claim 18, Grantcharov and Oliveira teach the invention as claimed, see discussion of claim 15, but fail to specifically teach the following limitation met by Mansi as cited:
determining, prior to analyzing the image stream, that the image stream is produced from a valid imaging procedure (see: Mansi, paragraph 86, 90, and 111-112, is met by function to pre-process any or all of the data packet, verify that the image data is relevant and eliminate an irrelevant (such as incorrectly labeled or irrelevant anatomical region or information not corresponding to regions of interest/potential user conditions) or unsuitable data packet and scans from further analysis).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis as taught by Grantcharov and Oliveira to include verifying that the image data is relevant and eliminating an irrelevant or unsuitable data packet and scans from further analysis as taught by Mansi with the motivation of saving time and/or resources by eliminating irrelevant scans (see: Mansi, paragraph 111).
As per claim 19, Grantcharov, Oliveira, and Mansi teach the invention as claimed, see discussion of claim 18, and further teach:
wherein determining that the image stream is produced from the valid imaging procedure comprises detecting that images of the image stream are collected while an imaging device used in the imaging procedure is inside of a patient's body (see: Mansi, paragraph 86, 90, and 111-112, is met by function to pre-process any or all of the data packet, verify that the image data is relevant and eliminate an irrelevant (such as incorrectly labeled or irrelevant anatomical region or information not corresponding to regions of interest/potential user conditions) or unsuitable data packet and scans from further analysis).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the analysis as taught by Grantcharov, Oliveira, and Mansi to include verifying that the image data is relevant and eliminating an irrelevant (such as incorrectly labeled or irrelevant anatomical region or information not corresponding to regions of interest/potential user conditions) or unsuitable data packet and scans from further analysis as taught by Mansi with the motivation of saving time and/or resources by eliminating irrelevant scans (see: Mansi, paragraph 111).
Response to Arguments
Applicant’s arguments from the response filed on 12/17/2025 have been fully considered and will be addressed below in the order in which they appeared.
In the remarks, Applicant argues in substance that (1) the 35 U.S.C. 101 rejections should be withdrawn in view of the amendments because “the elements recited in amended claims 1, 8 and 15 indeed amount to significantly more than the judicial exception. For example, independent claim 1 has been amended to recite, in part, "annotating by a processor the compressed images of the compressed image data," "streaming in real time, to the healthcare location, the computer annotations," and "displaying, on the display, the computer annotations over a local image of the biological structural feature of the patient captured by the imaging system, wherein the local image is separate from the compressed image data." The annotating by the processor, streaming in real time, and displaying of the computer annotations of amended claim 1 amounts to significantly more than a judicial exception as compressed images are annotated by a processor and the computer annotations are displayed over a local image on a display…Independent claim 15 has been amended to recite, in part, "detecting, at a healthcare location, a beginning of an imaging procedure based on a button activation at an image processing system," and "detecting, at the healthcare location, an end of the imaging procedure based on an ending event." The detecting, at the healthcare location, a beginning of an imaging procedure based on a button activation at an image processing system and detecting, at the healthcare location, an end of the imaging procedure based on an ending event of amended claim 15 amounts to significantly more than a judicial exception as a beginning of an imaging procedure is detected based on a button activation and an ending of the imaging procedure is based on an ending event.”
The rejections with regard to claims 1-2 and 4-14 are withdrawn in view of the amendments which provide further additional elements which when considered in ordered combination with the previously claimed additional elements, provide significantly more. However, with respect to claims 15-22, the Examiner respectfully disagrees and Applicant’s arguments are not persuasive.
With regard to claim 15, the argued activation of a button is broadly claimed and does not perform anything significantly technical. The “button activation at the image processing system” is configured through no more than a statement that a function is begun “based on” said activation, such that it amounts to no more than a mere instruction to apply the exception using generic computer elements. Similarly, the claimed resources and devices are configured to perform their functions through no more than statements that they are being “used” or “located” at various locations. Hence, the claims here are not directed to a specific improvement to computer functionality that amounts to a practical application. Rather, they are directed to the use of conventional or generic technology in a well-known environment, without any indication that the invention reflects an inventive solution to a technical problem presented by combining the two. In the present case, the claims fail to recite any elements that, individually or as an ordered combination, transform the identified abstract idea(s) in the rejection into a patent-eligible application of that idea.
In the remarks, Applicant argues in substance that (2) the 35 U.S.C. 103 rejections should be withdrawn in view of the amendments because, “Independent claim 1 has been amended to recite, in part, "unencrypting the compressed image data of a biological structural feature of a patient" and "identifying a characteristic of interest in the compressed image data, wherein the characteristic of interest is the biological structural feature of the patient in the compressed images corresponding to the compressed image data." Support can be found throughout the current specification including in, for example, paragraphs [0030] and [0044] of the current specification. The cited references Grantcharov in view of Mansi do not teach or suggest these claim features. For example, on page 10 of the Office Action, the Examiner states that "Grantcharov fails to specifically teach the following limitation met by Mansi" and cites portions of Mansi. However, Mansi does not overcome this deficiency found in Grantcharov as Mansi does not teach or suggest "wherein the characteristic of interest is the biological structural feature of the patient in the compressed images corresponding to the compressed image data" as recited in amended claim 1. Mansi in paragraph [0103] discloses that "S220 preferably includes identifying (e.g., locate, measure, quantify, etc.) and segmenting an affected brain region." Further, paragraph [0106] of Mansi discloses "a segmentation region (e.g., initial segmentation region, final segmentation region, etc.) is then formed." The regions disclosed in Mansi cannot be equated to the "biological structural feature of the patient" recited in amended claim 1. Further, as illustrated in FIG. 4 and FIG. 8 of Mansi, regions are identified, where as amended claim 1 recites "wherein the characteristic of interest is the biological structural feature of the patient in the compressed images corresponding to the compressed image data." Additionally, claim 1 has been amended with the subject matter of claim 3 and…”
The Examiner respectfully disagrees. Applicant’s arguments are not persuasive.
The broadly claimed unencrypting of the compressed image data is taught by Grantcharov in the decryption of the data streams and the embedding of encrypt/decrypt keys (see: Grantcharov, paragraph 129, 142, 147, 151, 154, and 213). That the claims specify that the unencrypted data is “of a biological structural feature of a patient” is inconsequential to the unencrypting and is necessarily the case, as the later identifying step identifies a characteristic of interest that is the biological structural feature of the patient. Mansi teaches identifying to locate, isolate, measure, quantify, and segment a region of one or more images (see: Mansi, paragraph 100-103 and 150); hence, Mansi directly teaches characteristics of structural features in images. Grantcharov teaches an intelligent dashboard interface for annotations (see: Grantcharov, paragraph 94). As combined in the rejection, these teachings of the prior art meet the broad claim limitations in question.
Applicant argues that the regions disclosed in Mansi cannot be equated to the "biological structural feature of the patient" recited in amended claim 1. This argument is not persuasive because Applicant also acknowledges that Mansi in paragraph [0103] discloses that "S220 preferably includes identifying (e.g., locate, measure, quantify, etc.) and segmenting an affected brain region", and that paragraph [0106] of Mansi discloses "a segmentation region (e.g., initial segmentation region, final segmentation region, etc.) is then formed." An affected brain region meets the broadly claimed biological structural feature of the patient.
As per the further argued amendments, Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument – see application of prior art Guo.
In the remarks, Applicant argues in substance that (3) the 35 U.S.C. 103 rejections should be withdrawn in view of the amendments because “Independent claim 15 has been amended to, in part, recite "detecting, at a healthcare location, a beginning of an imaging procedure based on a button activation at the image processing system," "detecting, at the healthcare location, an end of the imaging procedure based on an ending event," and "in response to detecting the end of the imaging procedure, incrementing a procedure count associated with the healthcare location based on characteristics of the imaging procedure." Support can be found throughout the current specification including in, for example, paragraph [0065] and [0065] of the current specification. The cited references, Grantcharov in view of Oliveira do not teach or suggest these claim features. For example, on page 19 of the Office Action, the Examiner acknowledges that Grantcharov does not teach the previously unamended feature and cites portions of Oliveira. However, Oliveira merely discloses in paragraph [0033] that "the operational data can include, for example, staff experience, equipment used, use of contrast agent, [and] a number of procedures performed." Oliveira does not discuss "detecting, at the healthcare location, an end of the imaging procedure based on an ending event," and "in response to detecting the end of the imaging procedure, incrementing a procedure count associated with the healthcare location based on characteristics of the imaging procedure." as recited in amended claim 15. Additionally, Oliveira is silent as to "detecting, at a healthcare location, a beginning of an imaging procedure based on a button activation at the image processing system," as recited in amended claim 15. Claim 22 has been introduced depending from amended claim 15 and reciting "the method of claim 15, wherein the ending event comprises an ending button press, a removal of a probe, or a passage of a duration of time." Specific support for claim 22 may be found in, for example, paragraph [0066] of the current specification.”
The Examiner respectfully disagrees. Applicant’s arguments are not persuasive.
The claims are broadly directed to detecting a beginning of an imaging procedure based on the activation of a button at an image processing system. Grantcharov meets such broad limitations by teaching a start point of a recording session and the generation of and tagging with time-stamps to create a timeline or clock for the data streams, activatable by a button touch, and a screen interface to control one or more components of the system, such as starting or stopping recording. The claims are further directed to a broad ending event, which also may be a button press, and Grantcharov meets such broad limitations by teaching the generation of and tagging with time-stamps to create a timeline or clock for the data streams, where a stop is activatable by a button touch, an end of the recording, and a screen interface to control one or more components of the system, such as starting or stopping recording (see: Grantcharov, paragraph 93, 97, 137, 159, 163, 220, 223, 253-254, and 264). Therefore, the prior art reasonably teaches the broadly claimed button presses indicating the beginning and ending of imaging as claimed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT A SOREY whose telephone number is (571)270-3606. The examiner can normally be reached Monday through Friday, 8am to 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fonya Long can be reached on (571) 270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT A SOREY/Primary Examiner, Art Unit 3682