DETAILED ACTION
Notice of AIA Status
The present application, filed on 3/19/21, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/9/2026 has been entered.
Status of Claims
Claims 1-16 and 22-23 are rejected.
Claims 1-16 and 22-23 are objected to.
Claims 17-18 and 24 are withdrawn.
Election/Restrictions
Newly submitted claim 24 is directed to an invention that lacks unity with the invention originally claimed for the following reasons:
Group I, claims 1-16 and 22-23, drawn to a method of identifying objects of a specimen container.
Group II, claims 17-18, drawn to a quality check module.
Group III, claim 24, drawn to a specimen testing apparatus.
The pending groups of inventions listed above do not relate to a single general inventive concept under PCT Rule 13.1 because, under PCT Rule 13.2, they lack the same or corresponding special technical features for the following reasons:
Groups I, II, & III lack unity of invention because even though the inventions of these groups require the technical feature of “identifying one or more objects of a specimen container using one or more neural networks; displaying an image of the specimen container”, this technical feature is not a special technical feature as it does not make a contribution over the prior art in view of Wissmann (WO2017132172A1, cited in the 03/19/2021 IDS), which teaches identifying one or more selected objects from the one or more objects (see [00103] and [00115]) using one or more neural networks (see [00106]); and displaying an image of the specimen container (see [0062], [00103], [00105-00106]).
Since applicant has received an action on the merits for the originally presented invention, this invention has been constructively elected by original presentation for prosecution on the merits. Accordingly, claims 17-18 and 24 are withdrawn from consideration as being directed to nonelected inventions. See 37 CFR 1.142(b) and MPEP § 821.03.
To preserve a right to petition, the reply to this action must distinctly and specifically point out supposed errors in the restriction requirement. Otherwise, the election shall be treated as a final election without traverse. Traversal must be timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are subsequently added, applicant must indicate which of the subsequently added claims are readable upon the elected invention.
Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.
Response to Claim Objections
The applicant’s remarks directed towards the objection of claims 1-16 and 22-23 are that claims 1-16 and 22-23 have been amended to address informalities. However, claims 1-16 and 22-23, in light of the amendments filed 3/9/2026, still include informalities. As a result, the objection to claims 1-16 and 22-23 is respectfully maintained.
Response to Arguments - Claim Rejections - 35 USC § 112
The applicant’s remarks directed towards the rejection of claims 1-16 and 22-23 under 35 U.S.C. §112 are that the claims have been amended to make clearer the recited features of applicant’s invention. However, the claims in light of the amendments filed 3/9/2026 remain indefinite within the meaning of 35 U.S.C. §112. As a result, the rejection of claims 1-16 and 22-23 under 35 U.S.C. §112 is respectfully maintained.
Response to Arguments - Claim Rejections - 35 USC § 101
The applicant’s remarks directed towards the rejection of claims 1-16 and 22-23 under 35 U.S.C. §101 are that independent claim 1 has been amended to make clearer that the claims aren’t directed to an abstract idea. However, claims 1-16 and 22-23 in light of the amendments filed 3/9/2026 remain directed towards an abstract idea within the meaning of 35 U.S.C. §101. As a result, the rejection of claims 1-16 and 22-23 under 35 U.S.C. §101 is respectfully maintained.
Response to Arguments - Claim Rejections - 35 USC § 103
The applicant traverses the rejections and submits that claims 1-16 and 22-23 aren’t unpatentable under 35 U.S.C. §103.
Specifically, the applicant argues that Wissmann does not disclose or suggest
"superimposing one of a first or second graphic on the display over an area on the displayed image of the specimen container identified by the one or more trained neural networks as comprising the first class of pixels representing the first object, wherein the one of the first and second graphics uniquely identifies the area to provide a visual verification of whether a correct identification of the first class of pixels representing the first object was made by the one or more trained neural networks”. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
In addition, the applicant states “Applicant has found no disclosure or suggestion in Ramsay of display techniques, superimposed graphics, or visualized verification of neural network determinations”. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Specifically, the applicant argues that Prideaux-Ghee doesn’t teach identifying a same pixel class using two different alternative graphics to indicate pixels relied upon by a trained neural network. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., identifying a same pixel class using two different alternative graphics to indicate pixels relied upon by a trained neural network) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
In addition, the applicant argues that Prideaux-Ghee isn’t analogous art. In response to applicant's argument that Prideaux-Ghee is nonanalogous art, it has been held that a prior art reference must either be in the field of the inventor’s endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). In this case, Prideaux-Ghee is in the analogous art of identifying objects in an image (see [0029] of Prideaux-Ghee).
In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).
As a result, claims 1-16 and 22-23 in light of the amendment filed 3/9/2026 remain unpatentable under 35 U.S.C. §103.
Claim Objections
Claims 1-16 and 22-23 are objected to because of the following informalities:
The preamble of claim 1 recites “a method of identifying one or more objects of a specimen container”, which is an intermediate step of claim 1. For the sake of clarity, consider rephrasing such that the preamble of claim 1 instead recites the ultimate step or result obtained from performing claim 1.
Claim 1 recites “capturing one or more images of the specimen container, the one or more images comprising a plurality of pixels and including one or more objects of the specimen container”. For the sake of clarity, consider rephrasing to ‘capturing one or more specimen container images including a plurality of pixels and one or more object images’.
Claim 1 recites “the capturing of the one or more images comprising generating pixel data from the plurality of pixels”. For the sake of clarity, consider rephrasing to ‘the capturing comprising generating pixel data from the plurality of pixels’.
Claim 1 recites “displaying on a display an image from the one or more captured images of the specimen container, the displayed image including the first object”. For the sake of clarity, consider rephrasing to ‘displaying, on a display, a displayed specimen container image derived from the one or more captured specimen container images, the displayed specimen container image including a displayed first object image’.
Claims dependent on an objected base claim are objected to because any claim in dependent form is construed to incorporate by reference all the limitations of the claim to which it refers.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-16 and 22-23 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 1 recites “identifying a first class of pixels having one or more particular wavelengths of light and individual pixel locations proximate to each other using one or more trained neural networks executing on a computer, the first class of pixels representing the first object, the one or more trained neural networks identifying and locating the first object from the one or more objects as imaged in the one or more captured images of the specimen container by processing the pixel data to identify the one or more particular wavelengths of light and the individual pixel locations”. The limitation is phrased in a manner that creates ambiguity, and circularity, as to which operations the one or more trained neural networks perform and in what order. The limitation expresses three distinct operations in one long nested parallel clause: identifying wavelengths, grouping pixels, and identifying the object. As written, the claim reads: identifying pixels having a wavelength and location using neural networks identifying an object by processing pixel data to identify the wavelength and location, such that a person of ordinary skill in the art cannot determine which limitation defines the operative step and which is a result or an intermediate result. For example, it isn’t clear whether (1) the neural networks identify the wavelengths, then identify the pixels, then identify the object; (2) the neural networks identify the class of pixels, then identify the object, then identify the wavelengths; or (3) the object is identified first, the wavelengths and pixel locations are later determined, and then the class of pixels. Consider breaking the operations into separate, ordered steps, for example:
processing the generated pixel data by executing, on a computer, one or more trained neural networks;
identifying one or more particular wavelengths of light and individual pixel locations proximate to each other using the processed pixel data by executing, on the computer, the one or more trained neural networks;
identifying a first class of pixels having the identified one or more particular wavelengths of light and the identified individual pixel locations by executing, on the computer, the one or more trained neural networks; and
identifying and locating a first object represented by the identified first class of pixels by executing, on the computer, the one or more trained neural networks.
Claim 1 recites “the displayed image of the specimen container identified by the one or more trained neural networks as comprising the first class of pixels representing the first object”. However, claim 1 doesn’t recite identifying a displayed image of the specimen container as comprising the first class of pixels representing the first object. Rather, the neural networks are described elsewhere in the claim as having the function of identifying the first class of pixels representing the first object. Accordingly, the modifier “identified by the one or more trained neural networks” appears to improperly modify the “displayed image of the specimen container”. Consider pointing out what is identified by the one or more trained neural networks after reciting the corresponding identifying step.
Claim 1 recites “to provide a visual verification of whether a correct identification of the first class of pixels representing the first object was made by the one or more trained neural networks”. However, claim 1 also states “the first class of pixels representing the first object”. Therefore it isn’t clear what ‘correct identification’ is being verified, because the claim already states that the first class of pixels represents the first object. Accordingly, the limitation creates ambiguity as to whether the one or more trained neural networks are required to perform an identification of the first object, merely provide a visual verification of whether such identification occurred, or something else. Consider rephrasing to ‘to enable a user to verify the identification of the first class of pixels representing the first object by the one or more trained neural networks’.
Claims dependent on an indefinite base claim are indefinite because any claim in dependent form is construed to incorporate by reference all the limitations of the claim to which it refers.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 and 22-23 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea without adding significantly more.
Step 1: Claims 1-16 and 22-23 are directed towards a process, a statutory category under 35 U.S.C. §101 (see MPEP §2106: “the claimed invention must be to one of the four statutory categories. 35 U.S.C. §101 defines the four categories of invention that Congress deemed to be the appropriate subject matter of a patent: processes, machines, manufactures and compositions of matter”).
Step 2A, Prong 1: an abstract idea is identified. Claim 1 recites the abstract idea of “identifying a first class of pixels having one or more particular wavelengths of light and locations proximate to each other using one or more trained neural networks and a computer, the first class of pixels representing a first object, the one or more trained neural networks identifying and locating the first object from the one or more objects as imaged in the one or more captured images of the specimen container by processing the pixel data to identify the one or more particular wavelengths of light and the individual pixel locations”, which can be performed in the human mind. Looking to the specification, identifying an object using a neural network mirrors the mental process of recognizing patterns and making determinations based on sensory input (see [0027] of the instant specification, “the neural network may analyze the first class of pixels and determine whether the specimen container is capped”; see also [0084] of the instant specification, “evaluation process … output class labels during testing/evaluations”), which is analogous to how a human mind recognizes objects through visual analysis, but for the use of one or more neural networks executing on a computer programmed to automatically carry out a sequence of arithmetic and logical operations (see the USPTO ‘October 2019 Update: Subject Matter Eligibility’ Guideline, “claims do recite a mental process when they contain limitations that can practically be performed in the human mind, including for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include: • a claim to ‘collecting information, analyzing it, and displaying certain results of the collection and analysis,’ where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group, LLC v. Alstom, S.A.”).
In addition, the neural network is programmed to employ mathematical algorithms to process data and compute output determinations (see [0084] of instant specification “a statistics generation process may be undertaken in 672, … then operated on by a multi-class classifier 674 to provide identification of the pixel classes present in the images in 676”). These steps can be broken down into mathematical operations that a person is able to perform mentally, albeit arguably at a slower pace (see the USPTO ‘October 2019 Update: Subject Matter Eligibility’ Guideline “Mathematical Calculations: A claim that recites a mathematical calculation will be considered as falling within the “mathematical concepts” grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation. There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word “calculating” in order to be considered a mathematical calculation. For example, a step of “determining” a variable or number using mathematical methods or “performing” a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation. Examples of mathematical calculations recited in a claim include: • performing a resampled statistical analysis to generate a resampled distribution, SAP Am., Inc. v. InvestPic, LLC”). 
Thus the claimed method is directed towards mental processes, specifically identifying objects by identifying patterns in image data in a manner mirroring human visual recognition, and mathematical calculations, both of which US courts have consistently deemed abstract ideas (see MPEP §2106: “the courts consider a mental process (thinking) that ‘can be performed in the human mind, or by a human using a pen and paper’ to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, ‘methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the basic tools of scientific and technological work that are open to all.’ 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972))”).
Step 2A, Prong 2: has the abstract idea been integrated into a particular practical application? Once the abstract idea is performed by identifying one or more objects from the one or more objects, a signal is generated by displaying, namely “displaying on a display an image from the one or more captured images of the specimen container including the first object; and
superimposing one of a first or second graphic on the display over an area on the displayed image of the specimen container identified by the one or more trained neural networks as comprising the first class of pixels representing the first object, wherein the one of the first and second graphics uniquely identifies the area to provide a visual verification of whether a correct identification of the first class of pixels representing the first object was made by the one or more trained neural networks;
wherein the first graphic includes a first configuration that matches in size and configuration the identified first class of pixels representing the first object, and the second graphic includes a second configuration that is larger in size than and outlines the configuration of the identified first class of pixels representing the first object”, which isn’t particular and instead merely generally links the abstract idea to the field of endeavor. See MPEP §2106.05(h). The instant case is comparable to Digitech Image Tech., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344 (Fed. Cir. 2014), which found a digital image processing method ineligible because the claims did not include additional elements beyond the abstract idea of gathering and combining data (see example 5 in the “Abstract Idea Examples” in the interim guidance on patent subject matter eligibility (2014 IEG)). In addition, the instant case is comparable to the alarm limit in Parker v. Flook, 437 U.S. 584, 588-89, 198 USPQ 193, 196 (1978) (see MPEP §2106.05(g)) and amounts to insignificant post-solution activity. Displaying an image and superimposing a graphic on the display over an area of the image don’t amount to a particular practical application. In addition, capturing the images is mere data gathering, which is also insignificant extra-solution activity and not a particular practical application. As a result, the superimposing is insignificant post-solution activity because the superimposing step is simply a way of presenting data to the user. The superimposing merely displays the image including the first object having the class of pixels identified by the neural network without limiting the abstract idea, which is comparable to the combining of data in Digitech Image Tech., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344 (Fed. Cir. 2014).
The displaying and superimposing steps are an output or result of the neural network’s identification step and are not integral to identifying the objects; instead, the superimposing is an extra-solution step that conveys the result without adding an inventive concept.
Step 2B: does the claim recite any elements which are significantly more than the abstract idea? Other than the abstract idea of identifying objects and the insignificant post-solution activity of displaying images and superimposing graphics, claim 1 recites “capturing one or more images of the specimen container, the one or more images including one or more objects of the specimen container, the capturing generating pixel data from a plurality of pixels”. However, these other elements are well-known, routine and conventional in the field and amount to insignificant extra-solution activity consisting of mere data gathering (see MPEP §2106.05(g)). The USPTO ‘October 2019 Update: Subject Matter Eligibility’ Guideline identifies examples that do not integrate a judicial exception into a practical application: merely including instructions to implement the abstract idea on a computer or using the computer as a tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception, and generally linking the use of a judicial exception to a particular technological environment or field of use.
Technological improvement: the method doesn’t improve the functionality of a computer or of neural network technology; rather, the computer is a mere conventional tool that applies the abstract idea. The use of the neural network executing on a computer doesn’t improve the functionality of the computer itself or provide a technological advancement in the neural network field. Merely using a neural network to process gathered data to identify objects, such that data about the identified objects is displayed and superimposed with graphics, doesn’t constitute an improvement to a computer (see MPEP §2106.05(a)).
None of the dependent claims resolves these issues, and they are likewise unpatentable under 35 U.S.C. §101 (see MPEP §2106.07).
In summary, claims 1-16 and 22-23 don’t amount to significantly more than the judicial exception and as a result the claims are unpatentable under 35 U.S.C. §101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5-14 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Wissmann (WO2017132172A1, cited in the 03/19/2021 Information Disclosure Statement) in view of Ramsay (US20090324067) and further in view of Prideaux-Ghee (US20180336729).
As to claim 1, Wissmann (WO2017132172A1) teaches a method of identifying objects of a specimen container (see [0031]) (see also [0037], which recites “the method may be used to identify characteristics of the specimen container, such as the container type (via identification of height and width thereof), the cap type, and/or the cap color”), (the method) comprising:
capturing one or more images of the specimen container (the Oxford dictionary defines “image” as “a visible impression obtained by a camera, telescope, microscope, or other device, or displayed on a computer or video screen” or a “mental representation or idea”, because the claim requires the capturing to generate pixel data from a plurality of pixels, capturing the image is interpreted as ‘obtaining a visible impression of the specimen container obtained by a camera, telescope, microscope, or other device, or displayed on a computer or video screen’) (see [0012] “The method includes providing a specimen container containing a specimen at an imaging location, providing one or more cameras configured to capture images at the imaging location, providing one or more light sources adjacent to the imaging location, illuminating the imaging location with the one or more light sources, and capturing multiple images including specimen images of the image location at multiple different exposure”) (see also [0013]), the one or more images comprising a plurality of pixels and including one or more objects (characteristics, cap type and color of the cap) of the specimen container (see [0037] “the method may be used to identify characteristics of the specimen container, such as the container type (via identification of height and width thereof), the cap type, and/or the cap color”), the capturing of the one or more images comprising generating pixel data from a plurality of pixels (see [0063] “a camera 440 that may be conventional digital camera capable of capturing a digital image (i.e., a pixelated image). Pixel as used herein may be a single pixel. In some instances, processing of the images by computer 143 may be by processing super pixels (a collection or grouping of pixels) to lower computational burden”);
identifying a first class of pixels (a class in [00103]) (see [00106] “a boosting classifier such as an adaptive boosting classifier (e.g., AdaBoost, Logit Boost, or the like), any artificial neural network”) (see [0040], which recites “each of the specimen and reference images may be processed by a computer in order to characterize (classify and/or quantify) the specimen, specimen container, or both”) (see also [0041], which recites “these multiple specimen and reference images may then be further processed by a computer to generate transmittance image data sets. The transmittance image data sets may be operated upon by a multi-class classifier to yield characterization results”) having one or more particular wavelengths of light and individual pixel locations proximate to each other using one or more trained neural networks (artificial neural network in [00106]) executing on a computer (computer in [00106]), the first class of pixels representing a first object (see [00103], which recites “for each transmittance 2D data set for each viewpoint, a segmentation process continues to identify a class for each pixel for each viewpoint. For example, the pixels may be classified as serum or plasma portion 212SP, settled blood portion 212SB, gel separator 313 (if present), air 212A, tube 212T, or label 218. Cap 214 may also be classified”), the one or more trained neural networks identifying and locating the first object from the one or more objects as imaged in the one or more captured images of the specimen container by processing the pixel data (transmittance image data sets in [0041]) by executing the one or more trained neural networks using the computer (see [00103] and [00115] “Quantifying the liquid region (e.g., the serum or plasma portion 212SP) may further include determining an inner width (Wi) of the specimen container”) (see also [00106] “a boosting classifier such as an adaptive boosting classifier (e.g., AdaBoost, Logit Boost, or the like), any artificial neural network”) (see [0040], which recites “Each of the specimen and reference images may be processed by a computer in order to characterize (classify and/or quantify) the specimen, specimen container, or both”) (see also [0041], which recites “these multiple specimen and reference images may then be further processed by a computer to generate transmittance image data sets. The transmittance image data sets may be operated upon by a multi-class classifier to yield characterization results”) to identify the one or more particular wavelengths of light and the individual pixel locations (see [00103], which recites “For each transmittance 2D data set for each viewpoint, a segmentation process continues to identify a class for each pixel for each viewpoint. For example, the pixels may be classified as serum or plasma portion 212SP, settled blood portion 212SB, gel separator 313 (if present), air 212A, tube 212T, or label 218. Cap 214 may also be classified”) (see also [00110], which recites “After image capture in 504, segmentation may be undertaken in 511. The segmentation in 511 may include an image consolidation and normalization in 512. During image consolidation in 512, the various exposure time images at each wavelength spectra (R, G, and B) and for each viewpoint are reviewed pixel-by-pixel to determine those pixels that have been optimally exposed, as compared to a standard (described above). For each corresponding pixel location of the exposure time images for each viewpoint, the best of any optimally-exposed pixel is selected for each spectra and viewpoint and included in an optimally-exposed 2D image data set”);
displaying on a display (display screen in [00137]) an image from the one or more captured images of the specimen container, the displayed image including the first object (see [00112] “3D model may be generated and constructed in 517 from the consolidated 2D image data sets. The 3D model may be used to ensure a result that is consistent among the various viewpoints (if multiple cameras 440A-440C are used) or the 3D model may be used directly for displaying the various classifications and quantifications”) (see also [00137] “the results of the 2D data sets or 3D model may be displayed or reported in any suitable manner or format, such as by displaying a 3D colored image on a display screen, providing a colored printout, displaying or providing a data sheet of values determined by the imaging”) (see [00113], which recites “According to the method, the liquid region (e.g., the serum or plasma portion 212SP) may be identified in 518. This may involve grouping all the pixels from class - serum or plasma portion 212SP, and then determining a location of the upper interface between liquid (serum or plasma portion 212SP) and air 212A (i.e. , LA) in 519 for the consolidated 2D image data sets”) (see also [00137] “the results of the 2D data sets or 3D model may be displayed or reported in any suitable manner or format, such as by displaying a 3D colored image on a display screen, providing a colored printout, displaying or providing a data sheet of values determined by the imaging”).
Wissmann doesn’t explicitly teach that the class of pixels has one or more particular wavelengths and locations proximate each other that represent the first object.
In the analogous art of providing methods of identifying objects, Ramsay (US20090324067) teaches a computer (see [0089], which recites “image mapping unit can be a processor, a process, software, firmware and/or hardware that maps image data to predetermined coordinate systems or spaces”) programmed to identify a class of pixels having particular wavelengths and locations proximate each other that represent the first object (see [0091], which recites “A color space is a space in which data can be arranged or mapped. One example is a space associated with red, green and blue (RGB). However, it can be associated with any number and types of colors or color representations in any number of dimensions”) (different colors of light correspond to different wavelengths; for example, in the visible spectrum, red light has the longest wavelength, while violet light has the shortest) (see also [0082], which recites “Hyperspectral data: Hyperspectral data is data that is obtained from a plurality of sensors at a plurality of wavelengths or energies. A single pixel or hyperspectral datum can have hundreds or more values, one for each energy or wavelength. Hyperspectral data can include one pixel, a plurality of pixels, or a segment of an image of pixels, etc., with said content”) (see claim 19, which recites “system for identifying a signature for a feature of interest using a predetermined color space, comprising: an image receiving unit that receives an image containing the feature of interest; an image mapping unit that maps the image to the predetermined color space using a nonlinear transformation to yield a mapped image; and a determining unit that determines the signature to the feature of interest using the mapped image.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as disclosed by Wissmann such that the class of pixels has one or more particular wavelengths and locations proximate each other that represent the first object as disclosed by Ramsay with a reasonable expectation of success for the benefit of effectively separating and identifying objects that have almost identical color, density and volume (see [0107] of Ramsay).
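For illustration only (this sketch is not part of the cited disclosures), the wavelength-based per-pixel classification discussed above can be modeled as assigning each pixel to the nearest of a set of reference colors; the specific reference colors and the nearest-color rule below are assumptions standing in for the multi-class classifier / neural network of the cited art.

```python
import numpy as np

# Illustrative reference colors (RGB) for two of the pixel classes named in
# Wissmann [00103]; the numeric values are assumptions, not from the reference.
CLASS_COLORS = {
    "serum_plasma": np.array([200, 180, 80]),
    "settled_blood": np.array([120, 20, 20]),
}

def classify_pixels(image):
    """Assign each pixel to the nearest reference color -- a simple stand-in
    for the multi-class classifier operating on per-pixel color/wavelength."""
    h, w, _ = image.shape
    labels = np.empty((h, w), dtype=object)
    for y in range(h):
        for x in range(w):
            px = image[y, x].astype(float)
            labels[y, x] = min(
                CLASS_COLORS,
                key=lambda c: np.linalg.norm(px - CLASS_COLORS[c]),
            )
    return labels

# Tiny 2x2 "image": top row serum-like pixels, bottom row blood-like pixels.
img = np.array(
    [[[198, 178, 82], [205, 185, 75]],
     [[118, 22, 18], [125, 15, 25]]], dtype=np.uint8
)
labels = classify_pixels(img)
```

Pixels of similar color that sit at adjacent locations thus fall into the same class, which is the sense in which a class of pixels has "particular wavelengths and locations proximate each other."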
Wissmann in view of Ramsay doesn’t teach superimposing one of a first or second graphic on the display over an area on the displayed image of the specimen container identified by the one or more trained neural networks as comprising the first class of pixels representing the first object, wherein the one of the first and second graphics uniquely identifies the area to provide a visual verification of whether a correct identification of the first class of pixels representing the first object was made by the one or more trained neural networks; wherein the first graphic includes a first configuration that matches in size and configuration the identified first class of pixels representing the first object, and the second graphic includes a second configuration of the area that is larger in size than and outlines the configuration of the identified first class of pixels representing the first object.
In the analogous art of identifying objects in an image, Prideaux-Ghee (US20180336729) teaches superimposing a first graphic on a display over an area of an image (see [0059], which recites “the AR system obtains the DT for an object and uses the DT to generate graphics or text to superimpose onto an image of an object”) (AR stands for augmented reality and DT stands for digital twin, see [0031], which recites “a DT is an example of a type of 3D graphical model that is usable with the AR system… DT includes a computer-generated representation of an object comprised of information that models the object (referred to as the physical twin, or PT) or portions thereof. The DT includes data for a 3D graphical model of the object and associates information about the object to information representing the object in the 3D graphical model”) to provide a visual verification of whether a correct identification of the first class of pixels representing the first object was made by the one or more trained neural networks wherein each of the first and second graphics uniquely identifies the location of the first class of pixels representing the first object on the image (see Fig. 6A) (the claim requires displaying a graphic capable of enabling visual verification by a user);
wherein the first graphic includes a first configuration that matches in size and configuration the first class of pixels representing the first object (see [0044], which recites “features of the object from the image and the 3D graphical model (from the DT) that match are aligned. That is, the 3D graphical model is oriented in 3D coordinate space so that its features align to identified features of the image”), and the second graphic includes a second configuration of the area that is larger in size than and outlines the configuration of the identified first class of pixels representing the first object (see [0014], which recites “information rendered from the graphical 3D model may be computer graphics that is in outline form, and that at least partly overlays the image”) (see also [0060], which recites “the information may be used to generate AR content from the image and the information about the part. For example, as described, graphics—which may be, e.g., transparent, opaque, outline, or a combination thereof—may be retrieved from the DT for the object instance and displayed over the part selected”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as disclosed by Wissmann in view of Ramsay by incorporating superimposing a first graphic on a display over an area of an image as disclosed by Prideaux-Ghee with a reasonable expectation of success such that one of a first or second graphic is superimposed on the display over an area on the displayed image of the specimen container identified by the one or more trained neural networks as comprising the first class of pixels representing the first object, wherein the one of the first and second graphics uniquely identifies the area to provide a visual verification of whether a correct identification of the first class of pixels representing the first object was made by the one or more trained neural networks; wherein the first graphic includes a first configuration that matches in size and configuration the identified first class of pixels representing the first object, and the second graphic includes a second configuration of the area that is larger in size than and outlines the configuration of the identified first class of pixels representing the first object for the benefit of effectively enhancing and explaining an aspect of the object (see [0072] of Prideaux-Ghee, which recites “the computer graphics that form part of the AR content may overlay the same element shown in an image to enhance or explain an aspect of the element”).
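As a purely illustrative sketch (not drawn from any of the cited references), the two graphics discussed above can be modeled on a pixel grid: a first graphic that matches the identified pixel class exactly, and a second graphic that is a larger outline around it. The padding rule and rectangle shape are assumptions for illustration.

```python
import numpy as np

def matching_graphic(mask):
    """First graphic: same size and configuration as the identified class."""
    return mask.copy()

def outline_graphic(mask, pad=1):
    """Second graphic: a rectangular outline one pixel larger than the
    identified region (assumes the region does not touch the image border)."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min() - pad, ys.max() + pad
    x0, x1 = xs.min() - pad, xs.max() + pad
    out = np.zeros_like(mask)
    out[y0:y1 + 1, [x0, x1]] = True   # left and right edges
    out[[y0, y1], x0:x1 + 1] = True   # top and bottom edges
    return out

mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 2:4] = True          # identified first-object pixels
g1 = matching_graphic(mask)    # matches the region exactly
g2 = outline_graphic(mask)     # larger outline around the region
```

Either graphic, superimposed over the displayed image, uniquely marks the area the classifier identified, enabling the visual verification recited in the claim.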
As to claim 2, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches that the first object is at least one of a cap (see [0032]), an air gap (see [00103]), a serum or plasma portion (see [00107]), a settled blood portion (see [00107]), or a gel separator (see [00107]).
As to claim 3, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches that the identifying and locating the first object is performed, at least in part, by at least one of a segmentation network (see [00137], which recites “correspondence between the segmentation of the various viewpoints may be confirmed with the 3D model”) and a classification network (once generated, each 2D statistical data set is presented to, and operated on, by the multi-class classifier, which may classify the pixels in the image data sets as belonging to one of a plurality of class labels; the multi-class classifier may be a boosting classifier such as an adaptive boosting classifier, including any artificial neural network; for each transmittance 2D data set for each viewpoint, a segmentation process continues to identify a class for each pixel for each viewpoint; the pixels may be classified as serum or plasma portion 212SP, settled blood portion 212SB, gel separator 313, air 212A, tube 212T, or label 218; see [0062], [00103], [00105-00106]), wherein the one or more trained neural networks comprise the at least one of the segmentation network or the classification network (see [00106]).
As to claim 5, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches that the displaying comprises displaying an image at least partially representing the specimen container (see abstract and [0037]).
As to claim 6, the method of claim 5 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches overlaying one or more images representing the one or more objects over an image at least partially representing the specimen container (see [0037], [0078], [00118]), wherein locations of the one or more objects are in locations of their respective pixels relative to the image at least partially representing the specimen container (see Abstract and [0037] and [00110]).
As to claim 7, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches that the identifying and locating the first object from the one or more objects comprises identifying a cap (see Abstract and [0037]).
As to claim 8, the method of claim 7 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches identifying a color of the cap (see Abstract and [0037]).
As to claim 9, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches that the identifying and locating the first object from the one or more objects comprises identifying a label (see [00129]).
As to claim 10, the method of claim 9 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches reading the label (see [0045]).
As to claim 11, the method of claim 9 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches identifying a physical condition of the label (see [0004], [0045], [0051] and [00107]).
As to claim 12, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches that the identifying and locating the first object from the one or more objects comprises identifying a serum (see [00107]) or plasma portion (see [00107]).
As to claim 13, the method of claim 12 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches identifying an interferent in the serum or plasma region (see [0033]).
As to claim 14, the method of claim 12 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches identifying at least one of hemolysis, icterus, or lipemia in the serum or plasma region (see [0033]).
As to Claim 23, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Prideaux-Ghee (US20180336729) teaches that the first graphic comprises cross hatching, shading, coloring or intensities different than other pixels in the displayed image (see Fig. 6A).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wissmann (WO2017132172A1, cited in the 03/19/2021 Information Disclosure Statement) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729) as applied to claim 1 and further in view of Segalovitz (US20170357851).
As to claim 4, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann further teaches segmenting the one or more images into a plurality of pixel classes (see [00103]) and identifying and locating an object in the one or more images using the one or more trained neural networks executing on the computer to process the pixel data to identify a second class of pixels constituting one or more of the pixel classes relative to the specimen container (see [0011], which discloses a multi-class classifier to classify the various components of at least the specimen and/or the specimen container; once generated, each 2D statistical data set is presented to, and operated on, by the multi-class classifier, which may classify the pixels in the image data sets as belonging to one of a plurality of class labels; the multi-class classifier may be a boosting classifier such as an adaptive boosting classifier, including any artificial neural network; for each transmittance 2D data set for each viewpoint, a segmentation process continues to identify a class for each pixel for each viewpoint; the pixels may be classified as serum or plasma portion 212SP, settled blood portion 212SB, gel separator 313, air 212A, tube 212T, or label 218; see [0036], [0062], [00103], [00105-00106]).
In addition, in the analogous art of providing methods of identifying objects, Ramsay (US20090324067) teaches a computer (see [0089], which recites “image mapping unit can be a processor, a process, software, firmware and/or hardware that maps image data to predetermined coordinate systems or spaces”) programmed to identify a class of pixels having particular wavelengths and locations proximate each other that represent the first object (see [0091], which recites “A color space is a space in which data can be arranged or mapped. One example is a space associated with red, green and blue (RGB). However, it can be associated with any number and types of colors or color representations in any number of dimensions”) (see also [0082], which recites “Hyperspectral data: Hyperspectral data is data that is obtained from a plurality of sensors at a plurality of wavelengths or energies. A single pixel or hyperspectral datum can have hundreds or more values, one for each energy or wavelength. Hyperspectral data can include one pixel, a plurality of pixels, or a segment of an image of pixels, etc., with said content”) (see claim 19, which recites “system for identifying a signature for a feature of interest using a predetermined color space, comprising: an image receiving unit that receives an image containing the feature of interest; an image mapping unit that maps the image to the predetermined color space using a nonlinear transformation to yield a mapped image; and a determining unit that determines the signature to the feature of interest using the mapped image.”).
Wissmann in view of Ramsay doesn’t teach identifying and locating a second object in the captured image.
In the analogous art of providing methods of identifying objects in an image, Segalovitz (US20170357851) teaches identifying and locating a second object in the captured image (see claim 1, which recites “A method for detecting, by a portable or a hand-held device, one or more rectangular-shaped object regions from a background in a captured image, the method by the device comprising: obtaining the captured image by a digital camera; analyzing the captured image using a deep convolutional neural network for detecting the rectangular-shaped object regions; cropping or extracting from the captured image each of the detected regions into a respective file; and transmitting one or more of the files over a wireless network using a wireless transmitter, wherein the neural network is further trained to recognize or classify the rectangular-shaped object regions in the captured image”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as disclosed by Wissmann in view of Ramsay in view of Prideaux-Ghee by incorporating identifying and locating a second object in the one or more images as disclosed by Segalovitz such that the second object is identified and located using the one or more trained neural networks executing on the computer to process the pixel data to identify a class of pixels having one or more particular wavelengths or intensities of light and locations proximate each other that represent the second object, with a reasonable expectation of success, for the benefit of providing improved image-based recognition.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Wissmann (WO2017132172A1, cited in the 03/19/2021 Information Disclosure Statement) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729) as applied to claim 1 and further in view of Luo (JP 2003-157438 A).
As to claim 15, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann in view of Ramsay doesn’t teach that the identifying and locating the first object comprises assigning a confidence gradient to one or more pixels identified by the one or more trained neural networks to identify as representing the first object, and wherein the superimposing further comprises displaying, on the image of the specimen container, one or more images indicating the confidence gradient of the one or more pixels identified by the one or more trained neural networks as representing the first object.
In the analogous art of providing method of identifying objects in images, Luo (JP 2003-157438 A) teaches assigning a confidence gradient to one or more pixels to identify as representing the first object, and delineating by displaying, on the image, one or more images indicating the confidence gradient of the one or more pixels as representing the first object (see the abstract, which recites “a method for detecting a main material … a method for detecting a material region in a digital color image having pixels of (red, green, blue) values, the method belonging to the material region based on color and texture characteristics. Assigning a confidence value to each pixel as a threshold, creating spatially adjacent material region candidates by thresholding the confidence value, and determining one or more characteristics of the material to determine the probability that the region belongs to the material. Analyzing the spatially adjacent regions based on the characteristics of the above, and generating a map of the detected material regions and the associated probabilities that the regions belong to the materials”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as disclosed by Wissmann in view of Ramsay in view of Prideaux-Ghee by incorporating the assignment of a confidence gradient to one or more pixels identified as representing the first object, and the display, on the image, of one or more images indicating that confidence gradient, as disclosed by Luo, with a reasonable expectation of success, such that the identifying and locating the first object comprises assigning a confidence gradient to one or more pixels identified by the one or more trained neural networks as representing the first object, and the superimposing further comprises displaying, on the image of the specimen container, one or more images indicating the confidence gradient of the one or more pixels identified by the one or more trained neural networks as representing the first object, for the benefit of providing improved image-based recognition.
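As an illustrative sketch only (not from the cited references), a per-pixel confidence gradient of the kind discussed above can be modeled as a score in [0, 1] derived from each pixel's distance to a reference color; the reference color and linear falloff below are assumptions standing in for a trained network's per-pixel scores.

```python
import numpy as np

def confidence_map(image, ref_color, scale=255.0):
    """Assign each pixel a confidence in [0, 1] that it belongs to the first
    object, based on color distance -- a stand-in for per-pixel network
    scores. The linear falloff over `scale` is an illustrative assumption."""
    diff = image.astype(float) - np.asarray(ref_color, dtype=float)
    dist = np.linalg.norm(diff, axis=-1)
    return np.clip(1.0 - dist / scale, 0.0, 1.0)

# 1x2 "image": left pixel matches the reference color, right pixel does not.
img = np.array([[[200, 180, 80], [0, 0, 0]]], dtype=np.uint8)
conf = confidence_map(img, ref_color=(200, 180, 80))
```

The resulting map could then be rendered over the displayed specimen-container image so that brighter pixels indicate higher confidence in the identification.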
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Wissmann (WO2017132172A1, cited in the 03/19/2021 Information Disclosure Statement) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729) as applied to claim 1 and further in view of Levenson (US20060245631).
As to claim 16, the method of claim 1 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Prideaux-Ghee (US20180336729).
Wissmann in view of Ramsay in view of Prideaux-Ghee doesn’t teach that the second graphic comprises an activation map wherein the activation map forms the configuration of the area and overlays the first class of pixels representing the first object on the displayed image.
In the analogous art of providing methods of identifying objects on an image, Levenson (US20060245631) teaches generating an activation map and displaying, on an image, the activation map wherein the activation map forms the configuration of the area and overlays the first class of pixels representing first object on the displayed image (see [0143], which recites “generating a classification map for the sample based on the final classification of step 722, or more generally, on the provisional pixel classification histogram data generated in the earlier steps. The classification map can include, for example, an image of the sample with classified regions highlighted in order to enhance contrast”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as disclosed by Wissmann in view of Ramsay in view of Prideaux-Ghee by incorporating the generation and display of an activation map as disclosed by Levenson, with a reasonable expectation of success, such that the second graphic comprises an activation map, wherein the activation map forms the configuration of the area and overlays the first class of pixels representing the first object on the displayed image, for the benefit of providing improved image-based recognition.
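Purely by way of illustration (not drawn from Levenson or the other cited references), overlaying an activation map on a displayed image can be modeled as alpha-blending a highlight color into the image in proportion to the map's values; the highlight color and blend factor below are assumptions.

```python
import numpy as np

def overlay_activation(image, activation, color=(255, 0, 0), alpha=0.5):
    """Blend an activation map (values in [0, 1]) over the displayed image,
    highlighting the classified region to enhance contrast. Color and alpha
    are illustrative choices."""
    out = image.astype(float)
    a = (alpha * activation)[..., None]          # per-pixel blend weight
    out = (1 - a) * out + a * np.asarray(color, dtype=float)
    return out.astype(np.uint8)

# 2x2 gray image; activation fires only on the top-left pixel.
img = np.full((2, 2, 3), 100, dtype=np.uint8)
act = np.array([[1.0, 0.0], [0.0, 0.0]])
shown = overlay_activation(img, act)
```

Only pixels where the activation is nonzero are tinted, so the overlay forms the configuration of the identified area while leaving the rest of the image unchanged.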
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Wissmann (WO2017132172A1, cited in the 03/19/2021 Information Disclosure Statement) in view of Ramsay (US20090324067) in view of Levenson (US20060245631) as applied to claim 16 and further in view of McManus (US20110001809).
As to Claim 22, the method of claim 16 is disclosed by Wissmann (WO2017132172A1) in view of Ramsay (US20090324067) in view of Levenson (US20060245631).
Wissmann in view of Ramsay in view of Levenson doesn’t teach that the activation map has an outline displayed as dashed lines.
In the analogous art of providing methods of identifying objects in images, McManus (US20110001809) teaches an outline of an object displayed as dashed lines (see [0018], which recites “FIG. 3A further illustrates perimeter edges 315 and 325 (shown with dashed lines) of a selected area that corresponds to a first object of interest 310 and a second object of interest 320. According to methods of the present invention, the display of VL image 203 is viewed in order to identify an outline 312, 322 of each of first and second objects 310, 320, and, then, one or more of interactive elements 340, 345, 350 may be employed to select the area defined by perimeter edges 315, 325”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method as disclosed by Wissmann in view of Ramsay in view of Levenson by incorporating an outline displayed as dashed lines as disclosed by McManus such that the activation map has an outline displayed as dashed lines with a reasonable expectation of success for the benefit of providing for improved image based recognition.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN BORTOLI whose telephone number is (571)270-3179. The examiner can normally be reached 10 A.M. to 7 P.M. EST Monday through Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lyle Alexander can be reached on (571)272-1254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN BORTOLI/Examiner, Art Unit 1797
/JENNIFER WECKER/Primary Examiner, Art Unit 1797