Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 2/28/24 and 7/24/25 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
Claim 11 is objected to because of the following informalities:
Claim 11 recites “generate, using a deep convolutional neural network, a pixel-wise binary mark for each image of the subset of images.” The recitation “binary mark” should be changed to “binary mask,” as recited in the Specification.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8, 9, 13-22 and 24-37 are rejected under 35 U.S.C. 103 as being unpatentable over Peleg, United States Patent Publication 20210137368, in view of Jerebko et al., United States Patent Publication 20090092300 (hereinafter “Jerebko”).
Claim 1:
Peleg discloses:
A system for processing images for features of interest, comprising:
one or more memory devices storing processor-executable instructions (see paragraphs [0045] and [0050]). Peleg teaches memory and instructions; and
one or more processors configured to execute the instructions to cause the system to perform operations to detect at least one feature of interest in images captured with an imaging device (see paragraphs [0008], [0009] and [0065]). Peleg teaches processors performing operations to detect features of interest in images, the operations comprising:
receiving an ordered set of images from the captured images, the ordered set of images being temporally ordered (see paragraph [0065]). Peleg teaches receiving a temporally ordered set of images;
analyzing one or more subsets of the ordered set of images individually using a local spatio-temporal processing module, the local spatio-temporal processing module being configured to determine a presence of characteristics related to at least one feature of interest in each image of each subset of images (see paragraph [0048]). Peleg teaches a pathology detector may analyze various parameters and/or features in or associated with an image (e.g., intensity level of pixels, color tones, amplifier gain, light exposure time, etc.), and identify features representative or characteristic to the pathology type involved;
the global spatio-temporal processing module being configured to refine the determined characteristics associated with each subset of images (see paragraph [0048]). Peleg teaches remotely calculating a score and modifying and further defining characteristics, which would also modify the score;
calculating a numerical value for each image using a timeseries analysis module, the numerical value being representative of the presence of at least one feature of interest and calculated using the refined characteristics associated with each subset of images and spatio-temporal information (see paragraphs [0048] and [0082]). Peleg teaches identifying features representative or characteristic to the pathology type involved, and, using these features, a classifier, which may be part of the pathology detector, may output a value (e.g., score) indicating the probability that the image includes the pathology; and
generating a report on the at least one feature of interest using the numerical value associated with each image of each subset of the ordered set of images (see paragraph [0135]). Peleg teaches generating a report including images that have the feature of interest in the images.
Peleg fails to expressly disclose annotating an image with a feature vector.
Jerebko discloses:
analyzing one or more subsets of the ordered set of images individually using a local spatio-temporal processing module, the local spatio-temporal processing module being configured to determine a presence of characteristics related to at least one feature of interest in each image of each subset of images and to annotate the subset images with a feature vector based on the determined characteristics in each image of each subset of images (see paragraph [0016]). Jerebko teaches determining the presence of the feature of interest and annotating the images with descriptive feature vectors;
processing a set of feature vectors of the ordered set of images using a global spatio-temporal processing module, wherein each feature vector of the set of feature vectors includes information about each determined characteristic of the at least one feature of interest (see paragraph [0037]). Jerebko teaches processing the feature vectors that include descriptive information about the feature of interest;
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Peleg to include determining feature vectors of the features of interest in the images for the purpose of effectively identifying features of interest in a plurality of images, as taught by Jerebko.
Claim 2:
Peleg discloses:
wherein the one or more processors are further configured to determine a likelihood of characteristics related to at least one feature of interest in each image of the subset of images (see paragraph [0011]). Peleg teaches predicting the probability that each image will have the feature of interest.
Claim 3:
Peleg discloses:
wherein the determined likelihood of characteristics in each image of the subset of images includes a float value between 0 and 1 (see paragraphs [0011] and [0012]). Peleg teaches predicting the probability that each image will have the feature of interest, the probability being based on calculated scores.
Claim 8:
Peleg discloses:
wherein to analyze the ordered set of images using the local spatio-temporal processing module to determine presence of characteristics the one or more processors are further configured to:
determine a vector of quality scores, wherein each quality score in the vector of quality scores corresponds to each image of the subset of the images (see paragraph [0082]). Peleg teaches determining a score vector based on the images having the presence of the feature.
Claim 9:
Peleg discloses:
wherein each quality score is an ordinal number between 0 and R, wherein a score 0 represents minimum quality and a score R represents a maximum quality (see paragraph [0082]). Peleg teaches determining a score vector based on the images having the presence of the feature, the scores being assigned ordinal numbers.
Claim 13:
Peleg discloses:
wherein the numerical value associated with each image is interpretable to determine a probability to identify the at least one feature of interest within the image (see paragraph [0096]). Peleg teaches a score, a numerical value, that shows the probability of the feature of interest being present.
Claim 14:
Peleg discloses:
wherein the one or more processors are further configured to:
output a first numerical value for an image where the at least one feature of interest is not detected; and output a second numerical value for an image where the at least one feature of interest is detected (see paragraph [0073]). Peleg teaches outputting values based on the presence of the feature of interest.
Claim 15:
Peleg discloses:
wherein a size of the subset of images is configurable by a user of the system (see paragraph [0038]). Peleg teaches enabling the user (e.g., the physician) to select any number of important images the user may want to review (e.g., 20 images, or 50 images, etc.).
Claim 16:
Peleg discloses:
wherein a size of the subset of images is dynamically determined based on a requested feature of interest (see paragraph [0036]). Peleg teaches the size of the subset of images is determined based on the requested feature of interest.
Claim 17:
Peleg discloses:
wherein a size of the subset of images is dynamically determined based on the determined characteristics (see paragraph [0036]). Peleg teaches the size of the subset can be based on a particular characteristic.
Claim 18:
Peleg discloses:
wherein the one or more subsets of images include shared images (see paragraph [0139]). Peleg teaches the subset including remote images to be shared locally with the captured images.
Claim 19:
Peleg discloses:
wherein the ordered set of images are received directly from the imaging device during a medical procedure (see paragraph [0034]). Peleg teaches an in-vivo imaging device that may capture a series of images as it traverses the GI system and transmit the images, typically by transmitting one image frame at a time.
Claim 20:
Peleg discloses:
wherein the presences of at least one feature of interest are determined from a portion of the captured images (see paragraph [0035]). Peleg teaches that detection of an anomaly in a tissue imaged by an imaging device may include measuring one or more pixel parameters (e.g., a color parameter) of one or more pixels in an image, and that an anomaly detected for display may be any anomaly, e.g., a polyp, a lesion, a tumor, an ulcer, a blood spot, an angiodysplasia, a cyst, a choristoma, a hamartoma, a tissue malformation, or a nodule, to list some anomalies.
Claim 21:
Peleg discloses:
wherein the generated report on the at least one feature of interest is generated from the captured images during or right after a medical procedure (see paragraphs [0034]-[0036]). Peleg teaches that the images may be later compiled at or by a receiver to produce a displayable video clip, an image stream, or a series of images to show the feature of interest in the images.
Claim 22:
Peleg discloses:
a selected image may be further processed, where the processing, or further processing, of the selected image may include analysis of the selected image, for example to further investigate the pathology with more robust software tools (see paragraph [0036]). Peleg teaches a selected image may be further processed, where the processing, or further processing, of the selected image may include analysis of the selected image, for example to further investigate the pathology with more robust software tools.
Claims 24, 37:
Although Claim 24 is a non-transitory computer readable medium claim and Claim 37 is a method claim, they are interpreted and rejected for the same reasons as the system of Claim 1.
Claim 25:
Peleg discloses:
wherein the one or more processors are configured to:
access the temporally ordered set of images from the captured images (see paragraph [0077]). Peleg teaches accessing temporally ordered captured images;
detect, using an event detector, an occurrence of an event in the temporally ordered set of images, wherein a start time and an end time of the event are identified by a start image frame and an end image frame in the temporally ordered set of images (see paragraphs [0035] and [0082]). Peleg teaches detecting an occurrence of features that can be identified by image number or time in the temporal order;
select, using a frame selector, an image from a group of images in the temporally ordered set of images, bounded by the start image frame and the end image frame, based on an associated score and a quality score of the image, wherein the associated score of the selected image indicates a presence of at least one feature of interest (see paragraph [0082]). Peleg teaches selecting an image from the group based on the scores and the time/distance of the temporally ordered images that indicates the presence of the feature;
merge a subset of images from the selected images based on a matching presence of the at least one feature of interest using an objects descriptor, wherein the subset of images is identified based on spatial and temporal coherence using spatio-temporal information (see paragraph [0082]). Peleg teaches merging images based on scores indicating the presence of the feature even when the images are not immediately adjacent to each other; and
split the temporally ordered set of images using temporal segmentor in temporal intervals which satisfy the temporal coherence of a selected task (see paragraph [0082]). Peleg teaches also splitting subsets of temporally ordered images that satisfy the task based on the presence of the feature of interest.
Claim 26:
Peleg discloses:
wherein to split the temporally ordered set of images in temporal intervals, the one or more processors are further configured to:
identify a subset of temporally ordered set of images with the presence of the at least one feature of interest; or identify a subset of temporally ordered set of images with the presence of an event (see paragraph [0004]). Peleg teaches identifying a subset of ordered images with the presence of the feature of interest.
Claim 27:
Peleg discloses:
wherein to identify a subset of temporally ordered set of images with the presence of the at least one feature of interest, the one or more processors are further configured to: add color to a portion of a timeline of the captured images that matches the subset of the temporally ordered set of images (see paragraph [0046]). Peleg teaches identifying a subset of ordered images with the presence of the feature of interest by enhancing the portion of images that have the feature.
Claim 28:
Peleg discloses:
wherein the color differs with different features of interest related to the at least one feature of interest, and/or wherein the color differs with different events detected using the event detector, and/or wherein the timeline is presented as part of a video summary, wherein the video summary includes overlaid text and graphics, wherein the video summary may be generated by selecting relevant frames from the captured images and has a variable frame rate video output (see paragraphs [0008] and [0046]). Peleg teaches enhancing the images based on the identified feature of interest in a video summary of images selected from all the images that include a feature of interest.
Claim 30:
Peleg discloses:
wherein the one or more processors are further configured to:
generate a dashboard with summary of the temporally ordered set of images, wherein the summary includes images selected using a frame selector module and augmented with display markings, wherein the generated dashboard includes quality scores of a medical procedure performed while images are captured using the imaging device, quality scores of an operator of the imaging device performing a medical procedure, and aggregated information from one or more of the event detector, frame selector, object descriptor, and temporal segmentor (see paragraph [0046]). Peleg teaches processing the image data in order to automatically display, for example on a display device, images (and/or a video clip including a series of images) that were captured by the device, and enabling a user (e.g., a physician) to select images for display, or to enhance displayed images that may have the feature of interest.
Claim 31:
Peleg discloses:
wherein the one or more processors are further configured to:
receive a plurality of tasks, wherein at least one task of the plurality of tasks is associated with a request to identify at least one feature of interest in the set of images (see paragraphs [0034]-[0036]). Peleg teaches receiving a plurality of tasks to perform the identifying of the feature of interest;
analyze, using a local spatio-temporal processing module, a subset of images of the set of images to identify presence of characteristics associated with the at least one feature of interest (see paragraphs [0034]-[0036]). Peleg teaches analyzing the images to identify the presence of a feature of interest; and
iterate execution of a timeseries analysis module for each task of the plurality of tasks to associate a numerical score for each task with each image of the subset of images (see paragraphs [0034]-[0036]). Peleg teaches identifying and scoring the presence of the feature of interest in each image.
Claim 32:
Peleg discloses:
wherein the local spatio-temporal processing module outputs subsets of analyzed images of the set of images, wherein each subset is associated with a task of the plurality of tasks (see paragraph [0036]). Peleg teaches outputting the subsets of images for further processing, further analyzing or displaying the subsets of images.
Claim 33:
Peleg discloses:
wherein the local spatio-temporal processing module determines the presence of characteristics by determining a vector of quality scores, wherein each quality score in the vector of quality scores corresponds to each image of the subset of the images (see paragraph [0082]). Peleg teaches determining a score vector based on the images having the presence of the feature.
Claim 34:
Peleg discloses:
features of interest related to the plurality of tasks (see paragraph [0036]). Peleg teaches determining features of interest related to a task of identifying features.
Peleg fails to expressly disclose feature vectors.
Jerebko discloses:
wherein the local spatio-temporal processing module generates a set of feature vectors for features of interest related to the plurality of tasks (see paragraph [0037]). Jerebko teaches processing the feature vectors that include descriptive information about the feature of interest related to a task;
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Peleg to include determining feature vectors of the features of interest in the images for the purpose of effectively identifying features of interest in a plurality of images, as taught by Jerebko.
Claim 36:
Peleg discloses:
wherein the operations further comprise:
aggregate output of the local spatio-temporal processing module for each task of the plurality of tasks using the timeseries analysis module (see paragraph [0046]). Peleg teaches processing the image data in order to automatically display, for example on a display device, images (and/or a video clip including a series of images) that were captured by the device, and enabling a user (e.g., a physician) to select images for display, or to enhance displayed images that may have the feature of interest.
Claim 35:
Peleg discloses:
wherein the operations further comprise:
analyze, using a global spatio-temporal processing module, sets of feature vectors for the subset of images analyzed by the local spatio-temporal processing module (see paragraph [0048]). Peleg teaches remotely calculating a score and modifying and further defining characteristics, which would also modify the score.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Peleg, in view of Jerebko, in further view of Kaiser et al., United States Patent Publication 20200066393 (hereinafter “Kaiser”).
Claim 4:
Peleg and Jerebko fail to expressly disclose a convolutional neural network to encode the image and aggregate determined characteristics.
Kaiser discloses:
wherein the one or more processors are further configured to perform operations to determine a likelihood of characteristics in each image of the subset of images:
encode each image of the subset of the images (see paragraph [0089]). Kaiser teaches encoding the images; and
aggregate the spatio-temporal information of the determined characteristics using a recurrent neural network (see paragraph [0089]). Kaiser teaches aggregating the characteristics of the temporally ordered images.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Peleg and Jerebko to include aggregating characteristics of features using a neural network for the purpose of efficiently extracting a sequence of temporal image features from a sequence of temporally successive tomographic perfusion imaging data sets, as taught by Kaiser.
Claims 7 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Peleg, in view of Jerebko, in further view of Ohara et al., United States Patent Publication 20080031598 (hereinafter “Ohara”).
Claim 7:
Peleg and Jerebko fail to expressly disclose signal processing on the image to improve quality.
Ohara discloses:
wherein to refine the likelihood of the characteristics the one or more processors are further configured to:
apply one or more signal processing techniques (see paragraph [0006]). Ohara teaches applying signal processing to the images.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Peleg and Jerebko to include signal processing on the images for the purpose of efficiently processing high quality images and a large quantity of images, as taught by Ohara.
Claim 10:
Peleg and Jerebko fail to expressly disclose signal processing on the image to improve quality.
Ohara discloses:
wherein to process the ordered set of images using the global spatio-temporal processing module, the one or more processors are further configured to:
refine quality scores of each image of the subset of images of the one or more subsets of the ordered set of images using signal processing techniques (see paragraph [0006]). Ohara teaches applying signal processing to the images to correct the quality of images.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Peleg and Jerebko to include signal processing on the images for the purpose of efficiently processing high quality images and a large quantity of images, as taught by Ohara.
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Peleg, in view of Jerebko, in further view of Deever et al., United States Patent Publication 20080165280 (hereinafter “Deever”).
Claim 5:
Peleg and Jerebko fail to expressly disclose using a temporal neural network to filter images.
Deever discloses:
wherein the one or more processors are further configured to:
encode each image of the subset of the images (see paragraphs [0045], [0068] and [0078]). Deever teaches encoding the images; and
aggregate the spatio-temporal information of the determined characteristics using a temporal convolution network (see paragraphs [0045], [0068] and [0078]). Deever teaches encoding the images and filtering the images using a temporal filtering algorithm to filter the characteristics in the images.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Peleg and Jerebko to include aggregating the information using a temporal filtering algorithm for the purpose of generating better images and videos using a temporal algorithm, as taught by Deever.
Claim 6:
Peleg and Jerebko fail to expressly disclose using a temporal neural network to filter images.
Deever discloses:
wherein the one or more processors are further configured to:
refine a likelihood of the characteristics in each image of the subset of images by applying a non-causal temporal convolution network (see paragraphs [0045], [0068] and [0078]). Deever teaches applying a temporal filtering algorithm to filter the characteristics in the images.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Peleg and Jerebko to include aggregating the information using a temporal filtering algorithm for the purpose of generating better images and videos using a temporal algorithm, as taught by Deever.
Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Peleg, in view of Jerebko, in further view of Maison et al., United States Patent Publication 20090244309 (hereinafter “Maison”).
Claim 11:
Peleg and Jerebko fail to expressly disclose generating a mask for the images.
Maison discloses:
wherein to analyze the one or more subsets of the ordered set of images using the local spatio-temporal processing module to determine the presence of characteristics, the one or more processors are further configured to:
generate, using a deep convolutional neural network, a pixel-wise binary mark for each image of the subset of images (see paragraphs [0158] and [0167]). Maison teaches using an algorithm to generate a binary mask for the images.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Peleg and Jerebko to include generating binary masks for the purpose of efficiently extracting objects from images with backgrounds and foregrounds, as taught by Maison.
Claim 12:
Peleg and Jerebko fail to expressly disclose generating a mask for the images.
Maison discloses:
wherein to process the one or more subsets of the ordered set of images using the global spatio-temporal processing module, the one or more processors are further configured to:
refine a binary mask for image segmentation using morphological operations exploiting prior information about a shape and distribution of the determined characteristics (see paragraphs [0158] and [0167]). Maison teaches during segmentation, changing the mask so that the shape and structure is maintained.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Peleg and Jerebko to include generating binary masks for the purpose of efficiently extracting objects from images with backgrounds and foregrounds, as taught by Maison.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Peleg, in view of Jerebko, in further view of Wolf et al., United States Patent Publication 20200273581 (hereinafter “Wolf”).
Claim 23:
Peleg and Jerebko fail to expressly disclose recommending an action based on medical guidelines.
Wolf discloses:
wherein the generated report of at least one feature of interest includes at least one of a recommended action based on a medical guideline, a recommended action of a set of recommended actions based on medical guidelines, or another action performed during a procedure (see paragraphs [1045]-[1058]). Wolf teaches recommending an action based on medical findings during the procedures.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the system disclosed by Peleg and Jerebko to include recommending an action based on medical findings during the procedure for the purpose of efficiently performing procedures and making accurate predictions during procedures, as taught by Wolf.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE whose telephone number is (571)270-7259. The examiner can normally be reached M-F 8a-4p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571)272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIONNA M BURKE/Examiner, Art Unit 2178 12/12/25