Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/07/2025 has been entered.
Response to Amendment
This communication is filed in response to the action filed on 11/07/2025.
Claims 1-4, 10-12, and 14-20 are currently amended. Claim 9 is canceled. Claims 1-8 and 10-20 are pending.
Response to Arguments
Applicant’s arguments and amendments filed on 11/07/2025, on page 9 under REMARKS, with respect to the 35 U.S.C. 112 rejections of claims 1-20 have been fully considered and are persuasive. The rejections of the claims have been withdrawn.
Applicant’s arguments filed on 11/07/2025, on pages 9-11 under REMARKS, with respect to the 35 U.S.C. 102 rejections of claims 1-20 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 using US 2010/0034441 A1.
Information Disclosure Statement
The information disclosure statements (IDS) filed on 08/01/2025 and 11/07/2025 have been considered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1-8 and 10-20 are rejected under 35 U.S.C. § 103 as being unpatentable over US 2023/0298180 A1 to KWEON et al. (hereinafter “KWEON”) in view of US 2010/0034441 A1 to MAKRAM-EBEID et al. (hereinafter “MAKRAM”).
As per claim 1, KWEON discloses a method of generating a representative frame image of a medical image (a computing system and method for blood vessel imaging and segmentation of a medical image; figs 1-3; paragraphs [0033-0035], [0037], [0039-0042]), performed by at least one processor (the system comprises a computing processor; fig 10; paragraphs [0085-0086]), the method comprising: acquiring a medical image depicting an image of a blood vessel (the method performed includes steps of inputting an image of a target blood vessel as angiography images 210 of a desired anatomical region; figs 1-3; paragraphs [0033-0035], [0037], [0039-0042]); determining a score for each of all frame images contained in the medical image by, for each frame image of the frame images (a score is produced by a machine learning model; the score is indicative of a likelihood/probability that each pixel in a plurality of pixels in a blood vessel image matches the target blood vessel, and can be used to determine, in the output data, whether a pixel score is greater than a threshold score related to the target blood vessel; paragraph [0037]): masking, in the frame image, a region of the frame image corresponding to the blood vessel (for each input image, a blood vessel image segmenting apparatus is adapted to generate a plurality of candidate mask images relating to a target blood vessel by applying a plurality of blood vessel segmentation models to any blood vessel images provided; paragraphs [0039-0044]).
KWEON fails to disclose and determining, based on a quantity of pixels in the masked region, the score for the frame image, wherein the score for the frame image comprises a score based on an intensity of a contrast agent calculable from the frame image, and the intensity of the contrast agent comprises a value reflecting a degree to which a region corresponding to the blood vessel injected with the contrast agent is distinguishable from the remaining region in the frame image; and generating a representative frame image of the medical image using all of the frame images, wherein the generating the representative frame image of the medical image comprises: normalizing the score for each frame image contained in the medical image within a predetermined range of values and determining a weight for each frame image contained in the medical image by repeatedly exponentiating the normalized score multiple times; and generating the representative frame image of the medical image by performing a weighted summation of all of the frame images using the weight for each frame image.
MAKRAM discloses and determining, based on a quantity of pixels in the masked region, the score for the frame image (determining within the masked region of interest a contrast value, acting substantially as a score related to the brightness value (intensity) of the pixels of contrast agent determined within the region of interest that pass the mask having a low band pass filter; paragraphs [0018], [0067-0070], [0074], [0077]), wherein the score for the frame image comprises a score based on an intensity of a contrast agent calculable from the frame image (the contrast value is determined based on the intensity and value of the pixels representing contrast agent injected into the veins and bloodstream of the patient; paragraphs [0018], [0067-0070], [0074], [0077]), and the intensity of the contrast agent comprises a value reflecting a degree to which a region corresponding to the blood vessel injected with the contrast agent is distinguishable from the remaining region in the frame image (the intensity value of the contrast agent is determined based on a region of interest which is masked using the low pass filter in order to filter out pixels not having a contrast agent in the vein and bloodstream of a subject; the contrast value is based on an intensity of the pixels containing the agent within the region of interest; paragraphs [0018], [0067-0070], [0074], [0077]); and generating a representative frame image of the medical image using all of the frame images, wherein the generating the representative frame image of the medical image comprises (the system is further adapted to generate representative image frames of the region of interest only displaying the pixels and veins/anatomy comprising the contrast agent in the medical image; fig 3; paragraphs [0025], [0053-0055], [0074-0077]): normalizing the score for each frame image contained in the medical image within a predetermined range of values and determining a weight for each frame image contained in the medical image by repeatedly exponentiating the normalized score multiple times (the contrast values acting as the scores are normalized using weighted normalized sums of the contrast values derived using provided equations; paragraphs [0073-0077]); and generating the representative frame image of the medical image by performing a weighted summation of all of the frame images using the weight for each frame image (the representative images are generated of the subject anatomy comprising the contrast agent, and in this case would be the veins injected with said agent; figs 3 and 6; paragraphs [0053-0055], [0067], [0074-0077]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify KWEON to include the normalized scores/values representing the pixels of the region of interest in which contrast agent is present, as taught by the MAKRAM reference. The suggestion/motivation for doing so would have been to provide space regularization of complex images, such as the cardiovascular veins of the subject, as suggested by MAKRAM at paragraph [0070]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine MAKRAM with KWEON to obtain the invention as specified in claim 1.
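By way of illustration only, the weighting scheme recited in claim 1 (normalizing per-frame scores within a predetermined range, repeatedly exponentiating the normalized scores to obtain weights, and performing a weighted summation over all frames) can be sketched as follows. The function name, the [0, 1] normalization range, the choice of squaring as the repeated exponentiation, and the epsilon terms are assumptions for illustration; they are not taken from the claims or from either cited reference.

```python
import numpy as np

def representative_frame(frames, scores, num_exponentiations=2):
    """Illustrative sketch (assumed details): normalize per-frame scores to
    [0, 1], sharpen them into weights by repeated squaring, and return the
    weighted sum of all frames as the representative frame image."""
    scores = np.asarray(scores, dtype=float)
    # Normalize the scores into the predetermined range [0, 1].
    s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    # Repeatedly exponentiate the normalized scores (here: squaring) so that
    # higher-scoring frames dominate the weight vector.
    w = s
    for _ in range(num_exponentiations):
        w = w ** 2
    w = w / (w.sum() + 1e-12)
    # Weighted summation of all frames using the per-frame weights.
    return np.tensordot(w, np.stack(frames), axes=1)
```

With scores [0.0, 0.5, 1.0] and two squarings, the weights become [0, 1/17, 16/17], so the best-scoring frame dominates the representative image while low-scoring frames contribute little.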
As per claim 2, KWEON in view of MAKRAM discloses the method of claim 1. Modified KWEON further discloses wherein the determining the score for each of all frame images comprises: acquiring a score vector composed of n score elements from n frame images through a machine learning model (acquiring an error score value which is dependent upon error evaluation criteria which are weighted and selected by the user in order to evaluate error scores for N frames and add them together; paragraphs [0037-0040], [0081]), wherein the medical image comprises the n frame images, and n is a positive integer greater than one (wherein the candidate masked images are greater in number than a single image, i.e., n is greater than one; paragraphs [0080-0083]).
As per claim 3, KWEON in view of MAKRAM discloses the method of claim 2. Modified KWEON further discloses wherein the determining the score for each of all frame images comprises: determining the n score elements each corresponding to one of the n frame images based on at least one of: a similarity between each of the n frame images and other frame images, or an image quality of each of the n frame images (the error evaluation criteria are used to generate each candidate mask; the criteria and related weights can be adjusted, and include which pixels have similar information/criterion values, allowing the system to compare the image to the desired target vessel values for the masked image frames; paragraphs [0037], [0048], [0080-0083]).
As per claim 4, KWEON in view of MAKRAM discloses the method of claim 2. Modified KWEON further discloses wherein the determining the score for each of all frame images comprises: identifying a region corresponding to the blood vessel in each of the n frame images (the system is adapted, via the machine learning model, to identify and segment a target blood vessel from a blood vessel image, wherein the blood vessel is part of an anatomical region such as the left main coronary artery LM, left anterior descending artery LAD, left circumflex artery LCX, and right coronary artery RCA, and for each area of blood vessels, a proximal area, a middle area, and a distal area; paragraphs [0037-0040]); identifying a region, within the region corresponding to the blood vessel, that satisfies a specified condition (wherein, of the plurality of regions, a region is identified corresponding to the target blood vessel based on a score exceeding a threshold value; paragraphs [0037-0040]); and determining the n score elements each corresponding to one of the n frame images based on a size of the region that satisfies the specified condition (determining the score based on N error score evaluation criteria of the N masked candidate images of the anatomical region containing the target blood vessel which satisfies the score threshold condition; paragraphs [0037-0040], [0056], [0080-0083]).
As per claim 5, KWEON in view of MAKRAM discloses the method of claim 4. Modified KWEON further discloses wherein the specified condition comprises: a condition in which an intensity of a color of pixels within the region that satisfies the specified condition is greater than or equal to a specified threshold (the system is adapted to evaluate an error level based on the brightness intensity of an area indicating a target blood vessel regarding the candidate mask image 800; the brightness intensity value may represent a distribution concentration of a contrast medium injected to obtain a blood vessel image, and when there is an area 853 having a difference in brightness intensity equal to or greater than a third threshold ratio from the trend line 852 in the area indicating the target blood vessel, a blood vessel image segmenting apparatus may determine the corresponding candidate mask image 800 as an error; paragraphs [0067-0068]).
As per claim 6, KWEON in view of MAKRAM discloses the method of claim 1. Modified KWEON further discloses wherein the generating the representative frame image of the medical image comprises: acquiring a score vector composed of n score elements from n frame images through a machine learning model (multiple error scores are acquired for each of a plurality of candidate masked images, and the error scores are generated from a machine learning model based on error evaluation criteria which receive user-input weights and selections; paragraphs [0037-0040], [0080-0083]); and acquiring a weight vector consisting of n weight elements from the score vector using a specified formula (the user adds weights to the error evaluation criteria when calculating the error totals of the masked images; paragraphs [0080-0083]).
As per claim 7, KWEON in view of MAKRAM discloses the method of claim 6. Modified KWEON further discloses wherein the generating the representative frame image of the medical image comprises: applying corresponding weight elements among the n weight elements included in the weight vector to each of the n frame images (applying weights as desired by the user based on the masked images relating to the error evaluation criteria used to calculate the error of the masked images; paragraphs [0037-0040], [0080-0083]); and merging the n frame images with the corresponding weight elements applied to generate the representative frame image (adding (merging) the computed error scores to achieve a total error after the errors have been calculated and weighted using the error evaluation criteria; paragraphs [0037-0040], [0080-0083]).
As per claim 8, KWEON in view of MAKRAM discloses the method of claim 7. Modified KWEON further discloses wherein the applying the corresponding weight elements to each of the n frame images comprises: applying the corresponding weight elements to each of pixels comprised in each of the n frame images (the blood vessel image segmenting apparatus may also evaluate an error level of a candidate mask image by giving a different weight to each error evaluation criterion and adding the error scores; paragraphs [0080-0083]).
As per claim 10, KWEON in view of MAKRAM discloses the method of claim 1. Modified KWEON further discloses wherein the determining the score for each of all frame images further comprises: determining the score for each of all frame images based on a value measuring similarity between each of all frame images and other frame images (during operation 420 of the processing method, blobs included in the candidate masked images comprise similar information, and, based on comparison to a threshold value, pixel values can be determined to indicate a target blood vessel; paragraphs [0037], [0048]).
As per claim 11, KWEON in view of MAKRAM discloses the method of claim 1. Modified KWEON further discloses wherein the determining the score for each of all frame images further comprises: determining the score for each of all frame images based on a value measuring an image quality of each of all frame images (determining the error score based on error evaluation criteria in order to determine error levels of a candidate masked image and, based on the error levels, generating a set of candidate mask images 511, 513, 515; the system may use a candidate mask image having the highest evaluation among the candidate mask images as a target blood vessel segmentation result; paragraphs [0037], [0046-0052]).
As per claim 12, KWEON in view of MAKRAM discloses the method of claim 1. Modified KWEON further discloses wherein the determining the score for each of all frame images further comprises: identifying a region corresponding to the blood vessel in each of all frame images (the system is adapted, via the machine learning model, to identify and segment a target blood vessel from a blood vessel image, wherein the blood vessel is part of an anatomical region such as the left main coronary artery LM, left anterior descending artery LAD, left circumflex artery LCX, and right coronary artery RCA, and for each area of blood vessels, a proximal area, a middle area, and a distal area; paragraphs [0037-0040]); identifying a region within the region corresponding to the blood vessel that satisfies a specified condition (wherein, of the plurality of regions, a region is identified corresponding to the target blood vessel based on a score exceeding a threshold value; paragraphs [0037-0040]); and determining the score for each of all frame images based on a size of the region that satisfies the specified condition (determining the score based on N error score evaluation criteria of the N masked candidate images of the anatomical region containing the target blood vessel which satisfies the score threshold condition; paragraphs [0037-0040], [0056], [0080-0083]).
As per claim 13, KWEON in view of MAKRAM discloses the method of claim 1. Modified KWEON further discloses wherein the masking, in the frame image, the region of the frame image corresponding to the blood vessel is based on a color of pixels within the region (the system is adapted to evaluate an error level based on the brightness intensity (which is a color value) of an area indicating a target blood vessel regarding the candidate mask image 800; the brightness intensity value may represent a distribution concentration of a contrast medium injected to obtain a blood vessel image, and when there is an area 853 having a difference in brightness intensity equal to or greater than a third threshold ratio from the trend line 852 in the area indicating the target blood vessel, a blood vessel image segmenting apparatus may determine the corresponding candidate mask image 800 as an error; paragraphs [0067-0068]).
As per claim 14, KWEON discloses a non-transitory computer-readable recording medium storing computer-readable instructions (a computing system comprising a computer storage medium storing instructions related to a computing method; paragraph [0087]), wherein the instructions, when executed by at least one processor (the computer further comprising a computer processing component to execute the instructions relating to the method; fig 10; paragraphs [0085-0086]), cause an electronic device to: acquire a medical image depicting an image of a blood vessel (the method performed includes steps of inputting an image of a target blood vessel as angiography images 210 of a desired anatomical region; figs 1-3; paragraphs [0033-0035], [0037], [0039-0042]); determine a score for each of all frame images contained in the medical image by, for each frame image of the frame images (a score is produced by a machine learning model; the score is indicative of a likelihood/probability that each pixel in a plurality of pixels in a blood vessel image matches the target blood vessel, and can be used to determine, in the output data, whether a pixel score is greater than a threshold score related to the target blood vessel; paragraph [0037]): masking, in the frame image, a region of the frame image corresponding to the blood vessel (for each input image, a blood vessel image segmenting apparatus is adapted to generate a plurality of candidate mask images relating to a target blood vessel by applying a plurality of blood vessel segmentation models to any blood vessel images provided; paragraphs [0039-0044]).
KWEON fails to disclose and determining, based on a quantity of pixels in the masked region, the score for the frame image, wherein the score for the frame image comprises a score based on an intensity of a contrast agent calculable from the frame image, and the intensity of the contrast agent comprises a value reflecting a degree to which a region corresponding to the blood vessel injected with the contrast agent is distinguishable from the remaining region in the frame image; and generate a representative frame image of the medical image using all of the frame images, wherein the instructions, when executed by the at least one processor, cause the electronic device to generate the representative frame image of the medical image by causing the electronic device to: normalize the score for each frame image contained in the medical image within a predetermined range of values and determine a weight for each frame image contained in the medical image by repeatedly exponentiating the normalized score multiple times; and generate the representative frame image of the medical image by performing a weighted summation of all of the frame images using the weight for each frame image.
MAKRAM discloses and determining, based on a quantity of pixels in the masked region, the score for the frame image (determining within the masked region of interest a contrast value, acting substantially as a score related to the brightness value (intensity) of the pixels of contrast agent determined within the region of interest that pass the mask having a low band pass filter; paragraphs [0018], [0067-0070], [0074], [0077]), wherein the score for the frame image comprises a score based on an intensity of a contrast agent calculable from the frame image (the contrast value is determined based on the intensity and value of the pixels representing contrast agent injected into the veins and bloodstream of the patient; paragraphs [0018], [0067-0070], [0074], [0077]), and the intensity of the contrast agent comprises a value reflecting a degree to which a region corresponding to the blood vessel injected with the contrast agent is distinguishable from the remaining region in the frame image (the intensity value of the contrast agent is determined based on a region of interest which is masked using the low pass filter in order to filter out pixels not having a contrast agent in the vein and bloodstream of a subject; the contrast value is based on an intensity of the pixels containing the agent within the region of interest; paragraphs [0018], [0067-0070], [0074], [0077]); and generate a representative frame image of the medical image using all of the frame images, wherein the instructions, when executed by the at least one processor, cause the electronic device to generate the representative frame image of the medical image by causing the electronic device to (the system is further adapted to generate representative image frames of the region of interest only displaying the pixels and veins/anatomy comprising the contrast agent in the medical image; fig 3; paragraphs [0025], [0053-0055], [0074-0077]): normalize the score for each frame image contained in the medical image within a predetermined range of values and determine a weight for each frame image contained in the medical image by repeatedly exponentiating the normalized score multiple times (the contrast values acting as the scores are normalized using weighted normalized sums of the contrast values derived using provided exponential equations using the medical images input to the system comprising veins having contrast agent introduced to them; paragraphs [0073-0077]); and generate the representative frame image of the medical image by performing a weighted summation of all of the frame images using the weight for each frame image (the representative images are generated of the subject anatomy comprising the contrast agent, and in this case would be the veins injected with said agent; figs 3 and 6; paragraphs [0053-0055], [0067], [0074-0077]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify KWEON to include the normalized scores/values representing the pixels of the region of interest in which contrast agent is present, as taught by the MAKRAM reference. The suggestion/motivation for doing so would have been to provide space regularization of complex images, such as the cardiovascular veins of the subject, as suggested by MAKRAM at paragraph [0070]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine MAKRAM with KWEON to obtain the invention as specified in claim 14.
As per claim 15, KWEON discloses an electronic device comprising: at least one processor (a computing system and method for blood vessel imaging and segmentation of a medical image, the computing system comprising a processor; figs 1-2, 10; paragraphs [0033], [0039-0042], [0085-0086]); and memory storing instructions that, when executed by the at least one processor (the computing system further comprising a computer storage medium storing instructions related to a computing method that are executed by the processor; paragraph [0087]), cause the electronic device to: acquire a medical image depicting an image of a blood vessel (the method performed includes steps of inputting an image of a target blood vessel as angiography images 210 of a desired anatomical region; figs 1-3; paragraphs [0033-0035], [0037], [0039-0042]); determine a score for each of all frame images contained in the medical image by, for each frame image of the frame images (a score is produced by a machine learning model; the score is indicative of a likelihood/probability that each pixel in a plurality of pixels in a blood vessel image matches the target blood vessel, and can be used to determine, in the output data, whether a pixel score is greater than a threshold score related to the target blood vessel; paragraph [0037]): masking, in the frame image, a region of the frame image corresponding to the blood vessel (for each input image, a blood vessel image segmenting apparatus is adapted to generate a plurality of candidate mask images relating to a target blood vessel by applying a plurality of blood vessel segmentation models to any blood vessel images provided; paragraphs [0039-0044]).
KWEON fails to disclose and determining, based on a quantity of pixels in the masked region, the score for the frame image, wherein the score for the frame image comprises a score based on an intensity of a contrast agent calculable from the frame image, and the intensity of the contrast agent comprises a value reflecting a degree to which a region corresponding to the blood vessel injected with the contrast agent is distinguishable from the remaining region in the frame image; and generate a representative frame image of the medical image using all of the frame images wherein the instructions, when executed by the at least one processor, cause the electronic device to generate the representative frame image of the medical image by causing the electronic device to: normalize the score for each frame image contained in the medical image within a predetermined range of values and determine a weight for each frame image contained in the medical image by repeatedly exponentiating the normalized score multiple times; and generate the representative frame image of the medical image by performing a weighted summation of all of the frame images using the weight for each frame image.
MAKRAM discloses and determining, based on a quantity of pixels in the masked region, the score for the frame image (determining within the masked region of interest a contrast value, acting substantially as a score related to the brightness value (intensity) of the pixels of contrast agent determined within the region of interest that pass the mask having a low band pass filter; paragraphs [0018], [0067-0070], [0074], [0077]), wherein the score for the frame image comprises a score based on an intensity of a contrast agent calculable from the frame image (the contrast value is determined based on the intensity and value of the pixels representing contrast agent injected into the veins and bloodstream of the patient; paragraphs [0018], [0067-0070], [0074], [0077]), and the intensity of the contrast agent comprises a value reflecting a degree to which a region corresponding to the blood vessel injected with the contrast agent is distinguishable from the remaining region in the frame image (the intensity value of the contrast agent is determined based on a region of interest which is masked using the low pass filter in order to filter out pixels not having a contrast agent in the vein and bloodstream of a subject; the contrast value is based on an intensity of the pixels containing the agent within the region of interest; paragraphs [0018], [0067-0070], [0074], [0077]); and generate a representative frame image of the medical image using all of the frame images, wherein the instructions, when executed by the at least one processor, cause the electronic device to generate the representative frame image of the medical image by causing the electronic device to (the system is further adapted to generate representative image frames of the region of interest only displaying the pixels and veins/anatomy comprising the contrast agent in the medical image; fig 3; paragraphs [0025], [0053-0055], [0074-0077]): normalize the score for each frame image contained in the medical image within a predetermined range of values and determine a weight for each frame image contained in the medical image by repeatedly exponentiating the normalized score multiple times (the contrast values acting as the scores are normalized using weighted normalized sums of the contrast values derived using provided exponential equations using the medical images input to the system comprising veins having contrast agent introduced to them; paragraphs [0073-0077]); and generate the representative frame image of the medical image by performing a weighted summation of all of the frame images using the weight for each frame image (the representative images are generated of the subject anatomy comprising the contrast agent, and in this case would be the veins injected with said agent; figs 3 and 6; paragraphs [0053-0055], [0067], [0074-0077]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify KWEON to include the normalized scores/values representing the pixels of the region of interest in which contrast agent is present, as taught by the MAKRAM reference. The suggestion/motivation for doing so would have been to provide space regularization of complex images, such as the cardiovascular veins of the subject, as suggested by MAKRAM at paragraph [0070]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine MAKRAM with KWEON to obtain the invention as specified in claim 15.
As per claim 16, KWEON in view of MAKRAM discloses the non-transitory computer-readable recording medium of claim 14. Modified KWEON further discloses wherein the instructions, when executed by the at least one processor, cause the electronic device to determine the score for each of all frame images by causing the electronic device to: determine, using a machine learning model, the score for each of all frame images (the score values relating to the error score and the probability score indicating that a particular pixel is indicative of a target blood vessel are both computed using the machine learning model; paragraphs [0037-0040]).
As per claim 17, KWEON in view of MAKRAM discloses the non-transitory computer-readable recording medium of claim 14. Modified KWEON further discloses wherein the instructions, when executed by the at least one processor, cause the electronic device to determine the score for each of all frame images based on at least one of: a similarity between the frame image and other frame images, or an image quality of the frame image (the error evaluation criteria are used to generate each candidate mask, and the criteria and related weights can be adjusted and include identifying which pixels have similar information/criterion values, allowing the system to compare the image to the desired target vessel values for the masked image frames; paragraphs [0037], [0048], [0080-0083]).
As per claim 18, KWEON in view of MAKRAM discloses the non-transitory computer-readable recording medium of claim 14. Modified KWEON further discloses wherein the instructions, when executed by the at least one processor, cause the electronic device to determine the score for each of all frame images based on a size of the masked region (the segmented masked blood vessel images include interconnected blobs, which may contain pixels indicating a target blood vessel, wherein the blobs of the masked image region are sorted/removed via a size threshold; paragraphs [0037-0040], [0056], [0080-0083]).
As per claim 19, KWEON in view of MAKRAM discloses the electronic device of claim 15. Modified KWEON further discloses wherein the instructions, when executed by the at least one processor, cause the electronic device to determine the score for each of the plurality of all frame images by causing the electronic device to: determine, using a machine learning model, the score for each of all frame images (the score values relating to the error score and the probability score, which indicates that a particular pixel is indicative of a target blood vessel, are both computed using the machine learning model; paragraphs [0037-0040]).
As per claim 20, KWEON in view of MAKRAM discloses the electronic device of claim 15. Modified KWEON further discloses wherein the instructions, when executed by the at least one processor, cause the electronic device to determine the score for each of all frame images based on at least one of: a similarity between the frame image and other frame images, or an image quality of the frame image (the error evaluation criteria are used to generate each candidate mask, and the criteria and related weights can be adjusted and include identifying which pixels have similar information/criterion values, allowing the system to compare the image to the desired target vessel values for the masked image frames; paragraphs [0037], [0048], [0080-0083]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/Jonathan S Lee/Primary Examiner, Art Unit 2677