DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 4 is objected to because of the following informalities: Lines 4 - 5 of claim 4 recite, in part, “least one RBC; determining, by the processor, the plurality of” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claims to --least one RBC; and determining, by the processor, the plurality of-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 4 is objected to because of the following informalities: Lines 10 - 11 of claim 4 recite, in part, “at least one RBC image; generating, by the processor, a first set” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claims to --at least one RBC image; and generating, by the processor, a first set-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: Lines 4 - 5 of claim 11 recite, in part, “least one RBC; determine the plurality of images” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claims to --least one RBC; and determine the plurality of images-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: Lines 10 - 11 of claim 11 recite, in part, “at least one RBC image; generating a first set” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claims to --at least one RBC image; and generating a first set-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 14 is objected to because of the following informalities: Lines 1 - 2 of claim 14 recite, in part, “is configured to: Classify each of the plurality” which appears to contain a typographical error and/or a minor informality. The Examiner suggests amending the claims to --is configured to: classify each of the plurality-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 18 is objected to because of the following informalities: Lines 5 - 6 of claim 18 recite, in part, “least one RBC; determine the plurality of images” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claims to --least one RBC; and determine the plurality of images-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim 18 is objected to because of the following informalities: Lines 11 - 12 of claim 18 recite, in part, “one RBC image; generating a first set” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claims to --one RBC image; and generating a first set-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 7, 8, 10, 14, 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. U.S. Publication No. 2018/0211380 A1 in view of El-Zehiry et al. U.S. Publication No. 2017/0132450 A1.
- The Examiner notes, with regards to claims 15 - 20, that claims 15 - 20 do not positively recite an interrelationship between the computer-executable instructions and an intended computer system for executing the computer-executable instructions. Absent such a positively recited interrelationship, the broadest reasonable interpretation of the limitations that the computer-executable instructions are intended to perform encompasses interpretations wherein those limitations are non-functional, because the claims do not limit the computer-executable instructions to an embodiment wherein the computer-executable instructions are executed by an intended computer system in order to perform the recited limitations.
- With regards to claims 1, 8 and 15, Tandon et al. disclose a method of analysing a blood smear image, (Tandon et al., Figs. 4A - 8, 10, 12, 18, 19, 24 & 25, Pg. 5 ¶ 0093 and 0096, Pg. 6 ¶ 0143 - 0147, Pg. 8 ¶ 0160 - Pg. 9 ¶ 0171, Pg. 10 ¶ 0193 - 0194, Pg. 11 ¶ 0205 - 0207, Pg. 12 ¶ 0210 - 0217) a system of analysing a blood smear image, (Tandon et al., Figs. 4A - 8, 10, 12, 18, 19, 24 & 25, Pg. 5 ¶ 0093 and 0096, Pg. 6 ¶ 0143 - 0147, Pg. 8 ¶ 0160 - Pg. 9 ¶ 0171, Pg. 10 ¶ 0193 - 0194, Pg. 11 ¶ 0205 - 0207, Pg. 12 ¶ 0210 - 0217, Pg. 22 ¶ 0380 - Pg. 23 ¶ 0388) comprising: a processor; (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) and a memory communicably coupled to the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) wherein the memory stores processor-executable instructions, which, on executing by the processor, cause the processor to [perform operations], (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) and a non-transitory computer-readable medium storing computer-executable instructions for analysing a blood smear image, (Tandon et al., Figs. 4A - 8, 10, 12, 18, 19, 24 & 25, Pg. 3 ¶ 0038, Pg. 5 ¶ 0093 and 0096, Pg. 6 ¶ 0143 - 0147, Pg. 8 ¶ 0160 - Pg. 9 ¶ 0171, Pg. 10 ¶ 0193 - 0194, Pg. 11 ¶ 0205 - 0207, Pg. 12 ¶ 0210 - 0217, Pg. 22 ¶ 0381) the computer-executable instructions configured for: detecting, by a processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) a plurality of blood cells in the blood smear image based on edge detection of the plurality of blood cells, (Tandon et al., Figs. 5, 12, 24 & 25, Pg. 6 ¶ 0143 and 0146 - 0147, Pg. 14 ¶ 0260 - 0263, Pg. 15 ¶ 0273 - 0277, Pg. 16 ¶ 0286 - 0290, Pg. 
18 ¶ 0331 - 0332) wherein the edge detection of the plurality of blood cells in the blood smear image is based on a preprocessing of the blood smear image; (Tandon et al., Pg. 14 ¶ 0260 - 0268, Pg. 16 ¶ 0286 - 0290, Pg. 18 ¶ 0330 - 0332) determining, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) contours of each of the plurality of blood cells based on the edge detection of the plurality of blood-cells; (Tandon et al., Pg. 6 ¶ 0146 - 0147, Pg. 14 ¶ 0262 - 0263, Pg. 15 ¶ 0275 - 0276, Pg. 16 ¶ 0286 - 0290, Pg. 18 ¶ 0331 - 0332) determining, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) a bounding box for each of the plurality of blood-cells based on the contours of each of the plurality of blood-cells; (Tandon et al., Figs. 23 - 25, Pg. 6 ¶ 0146 - 0147, Pg. 14 ¶ 0262 - 0263, Pg. 16 ¶ 0285 - 0290, Pg. 18 ¶ 0330 - 0332) classifying, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) each of the plurality of blood-cells as one of a white blood cell (WBC) or a red blood cell (RBC) using a deep learning model, (Tandon et al., Fig. 25, Pg. 7 ¶ 0150 - 0152, Pg. 11 ¶ 0206 - 0207, Pg. 12 ¶ 0210 - 0219, Pg. 13 ¶ 0221 - 0236, Pg. 16 ¶ 0296 - Pg. 17 ¶ 0313, Pg. 18 ¶ 0336) wherein the deep learning model is trained based on training data comprising a plurality of images of WBCs and RBCs; (Tandon et al., Figs. 7 & 8, Pg. 12 ¶ 0210 - 0219, Pg. 13 ¶ 0221 - 0236, Pg. 14 ¶ 0253 - 0256) determining, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) a count of WBCs, (Tandon et al., Pg. 11 ¶ 0205 - 0207, Pg. 12 ¶ 0210, Pg. 21 ¶ 0363 - 0370) and a count of RBCs (Tandon et al., Pg. 11 ¶ 0205 - 0207, Pg. 12 ¶ 0210 - 0211, Pg. 18 ¶ 0336, Pg. 
21 ¶ 0365) based on the classification and the contours of each of the plurality of blood-cells; (Tandon et al., Pg. 7 ¶ 0150 - 0154, Pg. 8 ¶ 0160 - 0163, Pg. 11 ¶ 0205 - Pg. 12 ¶ 0211, Pg. 16 ¶ 0285 - Pg. 17 ¶ 0298, Pg. 18 ¶ 0329 - 0336, Pg. 21 ¶ 0363 - 0365) and outputting, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) a report comprising the count of WBC, the count of RBCs or the volumetric information of the RBCs and the WBCs. (Tandon et al., Figs. 4A, 6 & 7, Pg. 7 ¶ 0150 - 0151, Pg. 8 ¶ 0160 - 0165, Pg. 11 ¶ 0205 - Pg. 12 ¶ 0211, Pg. 18 ¶ 0334 - 0336, Pg. 21 ¶ 0362 - 0369) Tandon et al. fail to disclose explicitly determining volumetric information of the RBCs and the WBCs. Pertaining to analogous art, El-Zehiry et al. disclose analysing a blood smear image, (El-Zehiry et al., Abstract, Figs. 1, 2, 5 - 7 & 14, Pg. 2 ¶ 0030 - Pg. 3 ¶ 0032, Pg. 3 ¶ 0038 - 0042) comprising: detecting a plurality of blood cells in the blood smear image based on edge detection of the plurality of blood cells, (El-Zehiry et al., Figs. 1, 2, 5 & 6, Pg. 2 ¶ 0031 - Pg. 3 ¶ 0032, Pg. 3 ¶ 0039, Pg. 3 ¶ 0042 - Pg. 4 ¶ 0043) wherein the edge detection of the plurality of blood cells in the blood smear image is based on a preprocessing of the blood smear image; (El-Zehiry et al., Figs. 1, 2, 5 & 6, Pg. 2 ¶ 0031 - Pg. 3 ¶ 0032, Pg. 3 ¶ 0039, Pg. 3 ¶ 0042 - Pg. 4 ¶ 0043) determining contours of each of the plurality of blood cells based on the edge detection of the plurality of blood-cells; (El-Zehiry et al., Figs. 1, 2, 5 & 6, Pg. 2 ¶ 0031 - Pg. 3 ¶ 0032, Pg. 3 ¶ 0039, Pg. 3 ¶ 0042 - Pg. 4 ¶ 0043) determining a bounding box for each of the plurality of blood-cells based on the contours of each of the plurality of blood-cells; (El-Zehiry et al., Figs. 1, 2, 5 & 6, Pg. 2 ¶ 0031 - Pg. 3 ¶ 0032, Pg. 3 ¶ 0039, Pg. 3 ¶ 0042 - Pg. 
4 ¶ 0043) determining a count of WBCs, a count of RBCs and volumetric information of the RBCs and the WBCs based on the classification and the contours of each of the plurality of blood-cells; (El-Zehiry et al., Figs. 1, 2, 5 & 6, Pg. 2 ¶ 0030 - 0031, Pg. 3 ¶ 0038 - 0042, Pg. 5 ¶ 0055 - 0059) and outputting a report comprising the count of WBCs, the count of RBCs or the volumetric information of the RBCs and the WBCs. (El-Zehiry et al., Fig. 14, Pg. 3 ¶ 0038 - 0042, Pg. 5 ¶ 0055 - 0059) Tandon et al. and El-Zehiry et al. are combinable because they are both directed towards image processing systems that process, classify and analyze blood cells. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Tandon et al. with the teachings of El-Zehiry et al. This modification would have been prompted in order to enhance the base device of Tandon et al. with the well-known and applicable technique El-Zehiry et al. applied to a comparable device. Determining volumetric information of the RBCs and the WBCs, as taught by El-Zehiry et al., would enhance the base device of Tandon et al. by providing it with additional information that it can use when classifying the health condition of a blood sample so as to improve the overall quality and reliability of any automated diagnoses it makes. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that volumetric information of the RBCs and the WBCs would be determined along with counts of the RBCs and the WBCs so as to provide the base device of Tandon et al. and its end-users with as much information as possible when evaluating the health condition of a blood sample. Therefore, it would have been obvious to combine Tandon et al. with El-Zehiry et al. to obtain the invention as specified in claims 1, 8 and 15.
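For illustration only, the claimed sequence of edge detection, contour determination and bounding-box computation addressed in the mapping above can be sketched as follows. This is a hypothetical, minimal example; it is not drawn from Tandon et al. or El-Zehiry et al., and all function names and thresholds are illustrative (a practical system would use a proper edge detector such as Canny together with contour tracing):

```python
import numpy as np

def detect_cell_bbox(img, edge_thresh=0.5):
    """Toy edge-based detection: gradient magnitude -> edge mask -> bounding
    box, standing in for the claimed steps of edge detection, contour
    determination and bounding-box computation."""
    gy, gx = np.gradient(img.astype(float))     # intensity gradients
    edges = np.hypot(gx, gy) > edge_thresh      # edge detection
    ys, xs = np.nonzero(edges)                  # "contour" pixel coordinates
    if ys.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

# Synthetic "blood smear": a single bright cell on a dark background.
img = np.zeros((64, 64))
img[20:40, 10:30] = 1.0
print(detect_cell_bbox(img))  # (10, 20, 29, 39)
```

The bounding box is derived directly from the coordinates of detected edge pixels, mirroring the claim's chain from edges to contours to bounding boxes.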
- With regards to claims 3, 10 and 17, Tandon et al. in view of El-Zehiry et al. disclose the method, system and non-transitory computer-readable medium of claims 1, 8 and 15, respectively, wherein the edges of each of the plurality of blood cells are determined by determining a bimodal image of the blood smear image upon the preprocessing, (Tandon et al., Figs. 10 - 12, Pg. 6 ¶ 0146 - 0147, Pg. 8 ¶ 0166 - 0169, Pg. 14 ¶ 0262 - 0268, Pg. 15 ¶ 0274 - 0276, Pg. 16 ¶ 0284 - 0290) wherein the edges of each of the plurality of blood cells are determined based on the bimodal image by using an edge detection technique. (Tandon et al., Figs. 10 - 12, Pg. 6 ¶ 0146 - 0147, Pg. 8 ¶ 0166 - 0169, Pg. 14 ¶ 0262 - 0268, Pg. 15 ¶ 0274 - 0276, Pg. 16 ¶ 0284 - 0290)
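By way of illustration only, one conventional way to derive a bimodal image from a grayscale smear prior to edge detection is global thresholding of the intensity histogram, e.g. Otsu's method. The sketch below is a hypothetical example and is not taken from the cited references:

```python
import numpy as np

def otsu_threshold(img):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    total = img.size
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]   # class pixel counts
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0                 # class means
        m1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two intensity populations (background vs. cells).
img = np.concatenate([np.full(100, 50), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img)
bimodal = (img >= t).astype(np.uint8)   # the bimodal (binary) image
```

An edge detection technique would then be applied to `bimodal` to determine the edges of each blood cell, as recited in claims 3, 10 and 17.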
- With regards to claims 7 and 14, Tandon et al. in view of El-Zehiry et al. disclose the method and system of claims 1 and 8, respectively, comprising: classifying, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) each of the plurality of blood-cells classified as the WBC as one of a plurality of WBC classes using the deep learning model, wherein each of the plurality of WBC classes correspond to a type of WBC from a plurality of WBC types. (Tandon et al., Fig. 25, Pg. 7 ¶ 0150 - 0152, Pg. 12 ¶ 0218 - 0219, Pg. 13 ¶ 0221 and 0231 - 0236, Pg. 16 ¶ 0296 - Pg. 17 ¶ 0298, Pg. 17 ¶ 0306 - 0313, Pg. 21 ¶ 0363 - 0368)
Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. U.S. Publication No. 2018/0211380 A1 in view of El-Zehiry et al. U.S. Publication No. 2017/0132450 A1 as applied to claims 1, 8 and 15 above, and further in view of Murphy et al. U.S. Publication No. 2018/0328848 A1.
- With regards to claims 2, 9 and 16, Tandon et al. in view of El-Zehiry et al. disclose the method, system and non-transitory computer-readable medium of claims 1, 8 and 15, respectively, wherein the preprocessing comprises: enhancing contrast, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) of the blood smear image; (Tandon et al., Figs. 10 - 12, Pg. 6 ¶ 0146 - 0147, Pg. 8 ¶ 0166 - 0169, Pg. 14 ¶ 0262 - 0268, Pg. 15 ¶ 0274 - 0276, Pg. 16 ¶ 0284 - 0290) and removing noise, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) in the blood smear image. (Tandon et al., Pg. 14 ¶ 0262 - 0268) Tandon et al. fail to disclose explicitly using a histogram equalization technique; and removing noise in the contrast enhanced blood smear image using a gaussian filter. Pertaining to analogous art, Murphy et al. disclose wherein the preprocessing comprises: removing noise in the contrast enhanced blood smear image using a gaussian filter. (Murphy et al., Figs. 4, 5 & 7A, Pg. 7 ¶ 0083 - 0086 and 0088 - 0089) Murphy et al. fail to disclose explicitly using a histogram equalization technique. However, the Examiner takes official notice of the fact that utilizing a histogram equalization technique when preprocessing digital images is notoriously well-known in the art. Tandon et al. in view of El-Zehiry et al. and Murphy et al. are combinable because they are all directed towards image processing systems that process, classify and analyze blood cells. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Tandon et al. in view of El-Zehiry et al. with the teachings of Murphy et al. This modification would have been prompted in order to enhance the combined base device of Tandon et al. in view of El-Zehiry et al. 
with the well-known and applicable technique Murphy et al. applied to a similar device. Removing noise in the contrast enhanced blood smear image using a gaussian filter, as taught by Murphy et al., would enhance the combined base device by improving its ability to accurately and reliably classify and analyze images of blood cells since as much erroneous image data as possible would be removed from the images and thus prevented from affecting classification and analysis results of the combined base device. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a gaussian filter would be utilized to remove noise from the contrast enhanced blood smear image so as to improve the ability of the combined base device to accurately and reliably classify and analyze images of blood cells. In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Tandon et al. in view of El-Zehiry et al. in view of Murphy et al. to include a histogram equalization technique during image preprocessing. This modification would have been prompted in order to enhance the ability of the combined base device of Tandon et al. in view of El-Zehiry et al. in view of Murphy et al. with the notoriously well-known technique of utilizing histogram equalization to enhance the quality of images and improve discrimination between imaged objects. Utilizing a histogram equalization technique to enhance the contrast of the blood smear image would enhance the combined base device by improving the contrast of the blood smear images such that imaged blood cells stand out more from the background, thereby improving the ability of the combined base device to accurately and reliably detect, segment and analyze individual blood cells in the blood smear images. 
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a histogram equalization technique would be utilized to enhance the quality of the blood smear images processed by the combined base device. Therefore, it would have been obvious to combine Tandon et al. in view of El-Zehiry et al. with Murphy et al. and the notoriously well-known technique of utilizing a histogram equalization technique to preprocess digital images to obtain the invention as specified in claims 2, 9 and 16.
Claims 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. U.S. Publication No. 2018/0211380 A1 in view of El-Zehiry et al. U.S. Publication No. 2017/0132450 A1 as applied to claims 1, 8 and 15 above, and further in view of Jung et al., "WBC image classification and generative models based on convolutional neural network", BMC Medical Imaging, Vol. 22, No. 94, May 2022, pages 1 - 16 in view of Soni et al. U.S. Publication No. 2020/0311913 A1.
- With regards to claims 4, 11 and 18, Tandon et al. in view of El-Zehiry et al. disclose the method, system and non-transitory computer-readable medium of claims 1, 8 and 15, respectively, wherein the training data is generated by: inputting, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) at least one training image to the deep learning model, (Tandon et al., Pg. 12 ¶ 0212 - 0219, Pg. 13 ¶ 0221, Pg. 14 ¶ 0253 - 0259, Pg. 16 ¶ 0289 - 0296) wherein the at least one training image comprises at least one WBC and/or at least one RBC; (Tandon et al., Pg. 12 ¶ 0212 - 0219, Pg. 13 ¶ 0221, Pg. 14 ¶ 0253 - 0259, Pg. 16 ¶ 0289 - 0296) determining, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) the plurality of images of WBCs and RBCs (Tandon et al., Pg. 12 ¶ 0212 - 0219, Pg. 13 ¶ 0221, Pg. 14 ¶ 0253 - 0259, Pg. 16 ¶ 0289 - 0296) by: determining, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) bounding boxes for each of the at least one WBC and/or the at least one RBC based on detection of contours of the at least one WBC and the at least one RBC in the at least one training image; (Tandon et al., Figs. 23 - 25, Pg. 6 ¶ 0146 - 0147, Pg. 14 ¶ 0262 - 0263, Pg. 16 ¶ 0285 - 0290, Pg. 18 ¶ 0330 - 0332) cropping, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) each of the bounding boxes to determine at least one WBC image and at least one RBC image. (Tandon et al., Figs. 23 - 25, Pg. 6 ¶ 0146 - 0147, Pg. 12 ¶ 0212 - 0219, Pg. 13 ¶ 0221, Pg. 14 ¶ 0253 - 0259, Pg. 16 ¶ 0285 - 0296, Pg. 18 ¶ 0330 - 0334) Tandon et al. 
fail to disclose explicitly generating a first set of samples corresponding to WBCs and a second set of samples corresponding to RBCs based on the at least one WBC image and the at least one RBC image respectively using a generator model of the deep learning model, wherein the first set of samples and the second set of samples are generated based on a classification of each sample as one of real sample or fake sample using a discriminator model of the deep learning model, wherein samples corresponding to the first set of samples and the second set of samples are generated until the first set of samples is balanced with respect to the second set of samples and each of the first set of samples and the second set of samples are classified as real by the discriminator model; and wherein the plurality of images of WBCs and RBCs are determined based on the first set of samples and the second set of samples, respectively. Pertaining to analogous art, Jung et al. disclose wherein the training data is generated by: inputting at least one training image to the deep learning model, wherein the at least one training image comprises at least one WBC and/or at least one RBC; (Jung et al., Pg. 1 Abstract, Pg. 2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) determining the plurality of images of WBCs and RBCs by: determining bounding boxes for each of the at least one WBC and/or the at least one RBC based on the at least one WBC and the at least one RBC in the at least one training image; (Jung et al., Pg. 1 Abstract, Pg. 2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) cropping each of the bounding boxes to determine at least one WBC image and at least one RBC image; (Jung et al., Pg. 1 Abstract, Pg. 
2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) generating a first set of samples corresponding to WBCs and a second set of samples corresponding to RBCs based on the at least one WBC image and the at least one RBC image respectively using a generator model of the deep learning model, (Jung et al., Pg. 1 Abstract, Pg. 2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) wherein the first set of samples and the second set of samples are generated based on a classification of each sample as one of real sample or fake sample using a discriminator model of the deep learning model, (Jung et al., Pg. 1 Abstract, Pg. 2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) wherein samples corresponding to the first set of samples and the second set of samples are generated until the first set of samples is balanced with respect to the second set of samples; (Jung et al., Pg. 1 Abstract, Pg. 2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) and wherein the plurality of images of WBCs and RBCs are determined based on the first set of samples and the second set of samples, respectively. (Jung et al., Pg. 1 Abstract, Pg. 2 Subsection “Contributions”, Pg. 5 Subsection “Pre-processing of WBC images”, Pg. 5 Figs. 1 & 2, Pg. 10 Subsection “Dataset sharing” - Pg. 12 Subsection “Conclusion”, Pg. 11 Fig. 4) Jung et al. fail to disclose expressly wherein each of the first set of samples and the second set of samples are classified as real by the discriminator model. 
Pertaining to analogous art, Soni et al. disclose wherein the training data is generated by: generating a first set of samples and a second set of samples using a generator model of the deep learning model, (Soni et al., Abstract, Figs. 1, 3 - 6 & 10, Pg. 2 ¶ 0019, Pg. 3 ¶ 0026, Pg. 4 ¶ 0040 - 0041, Pg. 6 ¶ 0058 - Pg. 7 ¶ 0060) wherein the first set of samples and the second set of samples are generated based on a classification of each sample as one of real sample or fake sample using a discriminator model of the deep learning model, (Soni et al., Abstract, Figs. 1, 3 - 6 & 10, Pg. 2 ¶ 0019, Pg. 3 ¶ 0026, Pg. 4 ¶ 0040 - 0041, Pg. 6 ¶ 0058 - Pg. 7 ¶ 0060) and wherein the plurality of images are determined based on the first set of samples and the second set of samples. (Soni et al., Abstract, Figs. 1, 3 - 6 & 10, Pg. 2 ¶ 0019, Pg. 3 ¶ 0026, Pg. 4 ¶ 0040 - 0041, Pg. 6 ¶ 0058 - Pg. 7 ¶ 0060) Tandon et al. in view of El-Zehiry et al. and Jung et al. are combinable because they are all directed towards image processing systems that process and classify blood cells. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Tandon et al. in view of El-Zehiry et al. with the teachings of Jung et al. This modification would have been prompted in order to enhance the combined base device of Tandon et al. in view of El-Zehiry et al. with the well-known technique Jung et al. applied to a comparable device. 
Utilizing a generator model of the deep learning model to generate training data until the first set of samples is balanced with respect to the second set of samples, as taught by Jung et al., would enhance the combined base device by improving its ability to produce a deep learning model that is able to accurately and reliably classify blood cells since it would be able to easily obtain enough training images for each category of object to be classified to ensure that the deep learning model is properly trained. Furthermore, this modification would have been prompted by the teachings and suggestions of El-Zehiry et al. that the training set used to train their machine learning model had 100 images per category, see at least page 5 paragraph 0053 of El-Zehiry et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a generator model of the deep learning model would be utilized to generate training data until the first set of samples is balanced with respect to the second set of samples so as to ensure that sufficient examples of each category of object to be classified by the deep learning model are readily and easily available to ensure that the deep learning model is properly trained to provide accurate and reliable classification of blood cell images. In addition, Tandon et al. in view of El-Zehiry et al. in view of Jung et al. and Soni et al. are combinable because they are all directed towards image processing systems that utilize machine learning models to evaluate medical images. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Tandon et al. in view of El-Zehiry et al. in view of Jung et al. with the teachings of Soni et al. This modification would have been prompted in order to enhance the combined base device of Tandon et al. in view of El-Zehiry et al. 
in view of Jung et al. with the well-known technique Soni et al. applied to a similar device. Utilizing first and second sets of samples classified as real by the discriminator model as training data, as taught by Soni et al., would enhance the combined base device by improving its ability to accurately and reliably classify blood cells since only the most realistic and convincing looking synthetically generated images would be utilized when training the deep learning model. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that only first and second sets of samples classified as real by the discriminator model would be utilized as the training data so as to ensure that only the highest quality synthetically generated images are used to train the deep learning model of the combined base device. Therefore, it would have been obvious to combine Tandon et al. in view of El-Zehiry et al. with Jung et al. and Soni et al. to obtain the invention as specified in claims 4, 11 and 18.
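For illustration only, the combined teaching mapped to claims 4, 11 and 18 (generating samples with a generator model, keeping only those a discriminator model classifies as real, and continuing until the WBC and RBC sample sets are balanced) can be sketched as follows. The `generator` and `discriminator` callables below are hypothetical stand-ins for trained GAN components; the sketch is not drawn from Jung et al. or Soni et al.:

```python
import random

def balance_with_gan(real_wbc, real_rbc, generator, discriminator):
    """Augment the minority class with generated samples, keeping only the
    samples the discriminator classifies as real, until both sets are
    balanced."""
    wbc, rbc = list(real_wbc), list(real_rbc)
    while len(wbc) != len(rbc):
        minority = wbc if len(wbc) < len(rbc) else rbc
        sample = generator()
        if discriminator(sample):      # discard samples judged fake
            minority.append(sample)
    return wbc, rbc

# Toy stand-ins: the generator emits a scalar "image" and the
# discriminator accepts roughly half of them as real.
random.seed(0)
gen = lambda: random.random()
disc = lambda s: s > 0.5
wbc, rbc = balance_with_gan([0.1] * 3, [0.2] * 10, gen, disc)
```

After the loop, both sets have equal size and every synthetic addition has passed the discriminator's real/fake classification, matching the claimed stopping condition.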
Claims 5, 6, 12, 13, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Tandon et al. U.S. Publication No. 2018/0211380 A1 in view of El-Zehiry et al. U.S. Publication No. 2017/0132450 A1 as applied to claims 1, 8 and 15 above, and further in view of Park et al. U.S. Publication No. 2023/0030787 A1.
- With regards to claims 5, 12 and 19, Tandon et al. in view of El-Zehiry et al. disclose the method, system and non-transitory computer-readable medium of claims 1, 8 and 15, respectively, comprising: displaying, by the processor, (Tandon et al., Fig. 4A, Pg. 3 ¶ 0038, Pg. 10 ¶ 0194 - 0197, Pg. 22 ¶ 0381 - Pg. 23 ¶ 0388) an analysis received from the system on a display screen. (Tandon et al., Pg. 7 ¶ 0150 - 0152, Pg. 8 ¶ 0162 - 0166, Pg. 11 ¶ 0205 - Pg. 12 ¶ 0211, Pg. 16 ¶ 0292 - 0293, Pg. 21 ¶ 0362 - 0365) Tandon et al. fail to disclose explicitly inputting the report as a query to a generative artificial intelligence-based query system; and an analysis received from the generative artificial intelligence-based query system. Pertaining to analogous art, Park et al. disclose inputting the report as a query to a generative artificial intelligence-based query system; (Park et al., Abstract, Figs. 2 - 5, Pg. 2 ¶ 0027 - 0028, Pg. 3 ¶ 0036 - 0044, Pg. 4 ¶ 0053 - 0060, Pg. 4 ¶ 0063 - Pg. 5 ¶ 0071) and displaying an analysis received from the generative artificial intelligence-based query system on a display screen. (Park et al., Abstract, Figs. 2 - 5, Pg. 2 ¶ 0027 - 0028, Pg. 3 ¶ 0036 - 0044, Pg. 4 ¶ 0053 - 0060, Pg. 4 ¶ 0063 - Pg. 5 ¶ 0071) Tandon et al. in view of El-Zehiry et al. and Park et al. are combinable because they are all directed towards systems and methods that utilize machine learning models and WBC counts to evaluate the health status of an individual. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Tandon et al. in view of El-Zehiry et al. with the teachings of Park et al. This modification would have been prompted in order to enhance the combined base device of Tandon et al. in view of El-Zehiry et al. with the well-known technique Park et al. applied to a comparable device. 
Inputting the report as a query to a generative artificial intelligence-based query system, as taught by Park et al., would enhance the combined base device by allowing end-users to utilize the WBC and RBC metrics it generated for various other purposes and in an increased number of applications, such as by inputting them into additional machine learning models, thereby improving the overall usefulness and appeal of the combined base device to potential end-users. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the report generated by the combined base device would be input as a query to a generative artificial intelligence-based query system, thereby providing the benefits noted above. Therefore, it would have been obvious to combine Tandon et al. in view of El-Zehiry et al. with Park et al. to obtain the invention as specified in claims 5, 12 and 19.
With regard to claims 6, 13 and 20, Tandon et al. in view of El-Zehiry et al. and further in view of Park et al. disclose the method, system and non-transitory computer-readable medium of claims 5, 12 and 19, respectively, wherein the analysis is based on the count of WBCs, the count of RBCs or the volumetric information of the RBCs and WBCs, (Tandon et al., Abstract, Pg. 21 ¶ 0363 - Pg. 22 ¶ 0379) wherein the analysis comprises one or more health conditions determined based on a comparison of the count of WBCs, the count of RBCs or the volumetric information of the RBCs and the WBCs with a corresponding predefined threshold count of WBCs, a predefined threshold count of RBCs or a predefined volumetric threshold of RBCs and WBCs. (Tandon et al., Abstract, Pg. 21 ¶ 0363 - Pg. 22 ¶ 0379)
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Praljak et al. U.S. Publication No. 2023/0221239 A1; which is directed towards image processing systems and methods for evaluating images of blood cells, wherein a machine learning model is utilized to classify blood cells of interest in an image into various classes of blood cells.
Wang et al. U.S. Publication No. 2021/0312243 A1; which is directed towards a medical image processing method and system, wherein a conditional generative adversarial neural network is trained to synthesize realistic blood cell images.
Ye et al. U.S. Publication No. 2022/0291196 A1; which is directed towards an image processing method and system for blood cell analysis, wherein a number of target cells and a number of reference cells are automatically identified from a cell image of a blood specimen.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC RUSH/Primary Examiner, Art Unit 2677