DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The IDSs filed 9/16/2021, 7/28/2025, 10/21/2025, and 1/23/2026 have been considered by the Examiner.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d) to EP 201999208 filed 10/2/2020.
Claim Rejections - 35 USC § 101
The instant rejection is maintained for reasons of record from the Office Action of 10/01/2025 and modified in view of Applicant’s amendments filed 1/23/2026.
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Step 1: Process, Machine, Manufacture or Composition
Claims 1-12 and 16-19 are drawn to a method, and are thus directed to a process.
Claim 15 is drawn to a device comprising a camera and a processing unit, and is thus directed to a machine.
Claims 13-14 are drawn to a computer program per se and a computer readable storage medium, which are not statutory classes of invention. A transitory signal such as a carrier wave, or a program per se, is not a statutory category of invention, as set forth in In re Nuijten. A review of the specification does not show a definition of computer readable media that excludes an embodiment that is information in a signal. As such, an embodiment of the claims reads on non-statutory subject matter (In re Nuijten, 84 USPQ2d 1495 (Fed. Cir. 2007)). Applicants may overcome the rejection by 1) amending the claims to be limited to the physical forms of computer readable storage media described in the specification, or 2) amending the claimed subject matter to be limited to “non-transitory”; see the notice regarding Computer Readable Media (1351 OG 212 (23 February 2010)).
Step 2A Prong One: Identification of an Abstract Idea
The claim(s) recite(s)
1. categorizing the state of the cavity into at least one category by applying at least one trained model on the image by using at least one processing unit.
This step reads on comparing feature information gathered from an image to a trained model which includes previously gathered image features. The step can be performed by the human mind or with math. The step is therefore an abstract idea.
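To illustrate the characterization above, the comparison of image features against previously gathered features in a trained model can be performed with elementary arithmetic. The following sketch is purely illustrative and hypothetical (the feature vectors, category names, and nearest-centroid rule are assumptions for the example, not the claimed model): it categorizes a feature vector by finding the closest stored per-category feature average.

```python
# Illustrative sketch only (not the claimed invention): categorizing an
# image's feature vector by comparing it against previously gathered
# per-category feature averages (a simple "trained model"), using only
# plain arithmetic (Euclidean distance).
from math import dist  # Euclidean distance between two points (Python 3.8+)

# Hypothetical "trained model": mean feature vectors gathered from
# prior images of cavities in known states.
MODEL = {
    "empty":       (0.1, 0.2),
    "closed_tube": (0.8, 0.7),
    "open_tube":   (0.5, 0.9),
}

def categorize(features):
    """Return the category whose stored feature average is closest."""
    return min(MODEL, key=lambda c: dist(MODEL[c], features))

print(categorize((0.75, 0.65)))  # nearest stored average is "closed_tube"
```

The same comparison could, in principle, be carried out by hand with pencil and paper, which is the point of the mental-process/mathematical-concept characterization.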
2. wherein the trained model is being trained on image data of the transport interface, wherein the image data comprises a plurality of images of the transport interface with cavities in different states.
This step reads on organizing gathered image data into a model. The step can be performed by the human mind or with math and is therefore an abstract idea.
3. providing the determined category of at least one cavity via at least one communication interface.
The step reads on classifying the image data with information in the trained model to determine a category. The step can be performed by the human mind or with math and is therefore an abstract idea.
Dependent claims 4-12 are further drawn to processes which are math or can be accomplished by the human mind, and are therefore abstract ideas. Claim 6 is drawn to scaling or shaping the image, background subtraction, or smoothing. Claim 7 is drawn to a Gaussian filter, Savitzky-Golay smoothing, and filtering. These processes are mathematical manipulations of data typically performed on a generic computer. Claim 10 is drawn to using a transfer learning technique, which is recited at a high level of generality and therefore reads on the mental processes of data analysis and math.
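As an illustration of the point that such filtering is ordinary mathematical manipulation of data, the following sketch implements Gaussian smoothing of a one-dimensional signal as a weighted moving average with Gaussian weights. It is a generic, hypothetical example (the radius, sigma, and edge-clamping choices are assumptions), not the claimed preprocessing.

```python
# Illustrative sketch only: Gaussian smoothing is a weighted moving
# average whose weights follow a Gaussian curve -- plain arithmetic of
# the kind typically performed on a generic computer.
import math

def gaussian_kernel(radius, sigma):
    """Normalized Gaussian weights for offsets -radius..+radius."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]  # normalize so the weights sum to 1

def gaussian_smooth(signal, radius=2, sigma=1.0):
    """Convolve the signal with the kernel, clamping at the edges."""
    k = gaussian_kernel(radius, sigma)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, wj in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp edge indices
            acc += wj * signal[idx]
        out.append(acc)
    return out

noisy = [0, 0, 10, 0, 0]
print(gaussian_smooth(noisy))  # the isolated spike is spread out and reduced
```

Savitzky-Golay smoothing is similarly a fixed linear combination of neighboring samples (derived from a local polynomial fit), differing from the above only in the choice of weights.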
Step 2A Prong Two: Consideration of Practical Application
The claims do not recite any additional elements that integrate the judicial exception into a practical application. The claims result in a step of providing information which is the category of a cavity, which is an abstract idea.
New claim 16, step (iv), recites operating the transport interface based on the determined category. However, this limitation amounts to a mere instruction to “apply it,” as described in MPEP 2106.05(f). The claim does not recite how the determined category is applied with or by use of the particular machine (the transport interface); i.e., how the transport interface is operated in response to the category is not recited.
This judicial exception is not integrated into a practical application because the claims do not meet any of the following criteria:
An additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Step 2B: Consideration of Additional Elements and Significantly More
The claimed method also recites "additional elements" that are not limitations drawn to an abstract idea. The recited additional elements are drawn to:
1. capturing at least one image of at least part of the transport interface using a camera, the at least one image containing the at least one cavity, wherein the transport interface is a device forming a physical boundary configured for transferring or providing transfer of the sample tubes between the transport module and the connected device, as in claim 1.
The instant limitation is drawn to image capturing for further analysis and classification of the image. The step is drawn to data gathering and is extra-solution activity as described in MPEP 2106.05(g). The limitation describing the transport interface as a device configured for transferring the sample tubes merely describes the data which may be in the image. This limitation is tangential to the recited abstract idea because it does not meaningfully limit the recited process of categorizing with a trained model. An image classification model can classify any physical object, and therefore the fact that the object in the image is a test tube transport device does not add significantly more to the recited process of categorizing.
2. a convolutional neural network model based on a YOLOV3-tiny deep learning architecture, as in claims 2-3.
3. a computer program product, computer readable medium, and device comprising a camera and processing unit, as in claims 13, 14, and 15.
4. transferring the sample tube between the transport module and the connected device using a rotating carousel having several cavities, each of which may be empty or contain a holder, a closed sample tube, or an open sample tube, as in claims 17 and 19.
The instant limitation does not add significantly more to the recited abstract idea because the process of transporting and structure of the carousel for holding test tubes is tangential to the process of classifying images of the open or closed state of test tubes. The transport method and device does not meaningfully limit the abstract idea steps performed by the CNN.
Furthermore, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the CNN, including the CNN based on the YOLOV3-tiny deep learning architecture, is interpreted as a generic computer applied to performing the abstract idea of categorizing the state of an object from image data. The application of deep learning with YOLOV3 to image analysis is well understood, routine, and conventional, as evidenced by at least Wang et al. (Proceedings of the 28th Chinese Control Conference (2019) pgs. 8750-8755) and Pang et al. (PLOS One (2019) pgs. 1-11). Both Wang et al. and Pang et al. perform object detection and classification with deep learning YOLOV3.
Other elements of the method include the camera, which is routine for image data collection, and the generic computer and computer readable medium, which are recitations of generic computer structure that serves to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea recited in the instantly presented claims into a patent-eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Response to Arguments
Applicant's arguments filed 12/31/2025 have been fully considered but they are not persuasive.
In response to the 35 USC 101 rejection, Applicants argue that claim 1 has been amended in step (i) to recite “the at least one image containing the at least one cavity, wherein the transport interface is a device forming a physical boundary configured for transferring or providing transfer of the sample tubes between the transport module and the connected device.”
In response, this limitation describes what is in the image to be analyzed, i.e. “the at least one cavity,” and further describes the transport interface as forming a physical boundary for transporting tubes. The limitation describing the transport interface is tangential and does not meaningfully limit the recited analysis step of categorizing the state of a cavity. It merely describes an object, parts of which may be in the image; that is, it describes the data that has been collected by the camera. The categorization step does not consider the structure of the transport interface, nor is the transport interface itself a recited additional element.
With respect to new claims 16-19, the claims have been amended to recite “(iv) operating the transport interface based on the determined category.” The claims attempt to recite a practical application of the abstract idea in conjunction with a particular machine or manufacture that is integral to the claim. However, the additional element of claim 16, step (iv), is a mere “apply it” limitation, as described in MPEP 2106.05(f). Step (iv) is not particular with regard to how the transport interface is operated based on the determined category, nor is the claim particular as to how the determined category is integrated with operating the transport interface, such as moving the transport device in response to the determined category.
It is also noted that claims 13-14 still read on a signal or program per se. Applicants should recite a non-transitory computer readable medium rather than a transitory embodiment.
Claim Rejections - 35 USC § 103
The instant rejection is maintained for reasons of record from the Office Action of 10/01/2025 and modified in view of Applicant’s amendments filed 1/23/2026.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4-16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 2020/0167591; IDS 9/16/2021).
Chang et al. teach capturing the tops of one or more sample tubes and processing the one or more images of tops by applying the image to a convolutional neural network (par. 0006); images are captured with a camera (par. 0029)(i.e. capturing at least one image of at least a part of the transport interface by using at least one camera), as in claim 1, step (i) and claims 15-16, step (i).
Chang et al. teach (par. 0004) that storage and transport between automated analyzers and storage locations is performed using trays; a tray may be an array of several patient samples stored in test tubes, vials, or the like (i.e. cavities); trays (i.e. physical boundary) may be stackable (i.e. the transport interface is a device forming a physical boundary configured for transferring or providing transfer of the sample tubes between the transport module and the connected device), as in claim 1.
Chang et al. teach generating an edge map from the processed image and determining sample tube categories based on the generated edge map (par. 0008); tube geometry is characterized and the tube is classified (par. 0034); sample tube categories include tube height, diameter, shortest or longest tube, tube type, or offset position (par. 0053) (i.e. categorizing the state of the cavity into at least one category by applying at least one trained model on the image using a processing unit; providing the determined category of a cavity), as in claim 1, step (ii) and claims 15-16, steps (ii)-(iii).
Chang et al. teach (par. 0008) determining sample tube categories or characteristics based on the generated edge map, and controlling a robot based on the determined sample tube categories or characteristics (i.e. operating the transport interface based on the determined category), as in claim 16, step (iv).
Chang et al. teach determining categories of the sample tube (par. 0053) and storing the categories (which suggests providing the determined categories via a communication interface), as in claim 1, step (iii) and claim 15.
Chang et al. do not specifically teach a model trained on image data of the transport interface wherein the image data is of a plurality of images of the transport interface with cavities in different states, as in claims 1 and 9.
However, Chang et al. teach inputting training images of tubes (Figure 11); Chang et al. teach differentiating edges coming from the tube tops from edges coming from other objects in the image (par. 0025); Chang et al. teach images of different tubes (Figure 5A-5C) and determining different tube characteristics (par. 0053), which suggests that the model is trained to detect different types of tubes (i.e. a model trained on image data of a plurality of images of the transport interface with cavities in different states).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to train a classification model which includes a convolutional neural network, as taught by Chang et al., using images of sample tubes in different states (such as different heights, or presence or absence), as also taught by Chang et al. The combination of elements taught by Chang et al. is the application of the known technique of training a classifier with images to a convolutional neural network based classifier. Such is a combination of known prior art elements that would yield a predictable result of arriving at a model trained on image data of a plurality of images of different sample tubes in different states on tube carriers.
Regarding dependent claims 2, 4-8, 10-14 and 18:
Chang et al. teach a convolutional neural network as part of the tube categorization process (par. 0034 and Figure 14), as in claim 2.
Chang et al. teach determining the shortest and longest tube and the absence of tube in slot (par. 0053)(i.e. the trained model is configured for classifying different image regions into different categories), as in claim 4.
Chang et al. teach one or more cameras or other suitable image capture devices, which suggests a video camera; a video camera would be an obvious substitution for a still camera and would generate a video stream, as in claim 5.
Chang et al. teach preprocessing the image by shaping, smoothing and background subtraction (Figure 5A-5C and 6A-6B) and applying a filter (par. 0037) and suggests a bilateral filter (Figures 6A-6C), as in claims 6-7.
Chang et al. teach categorizing presence and absence of tubes in a holder as well as tube type, such as a tube with a cap (par. 0053), as in claim 8.
Chang et al. teach multiple convolution and pooling layers wherein a second convolution layer further detects edge circles from the first convolution layer (par. 0035-0039 and Figure 4)(i.e. transfer learning), as in claim 10.
Chang et al. teach manually annotated circle edges as inputs to the categorization model (par. 0047)(i.e. retraining based on custom prepared dataset), as in claim 11.
Chang et al. teach controlling robot to move a sample tube based on the generated map from the processed image (par. 0003 and 0054)(i.e. determining if it is possible to start operation with a connected device based on determined category), as in claim 12.
Chang et al. teach computer readable storage medium (par. 0007) and programs (par. 0032), as in claims 13-14.
Chang et al. teach (par. 0004) that storage and transport between automated analyzers and storage locations is performed using trays; a tray may be an array of several patient samples stored in test tubes, vials, or the like (i.e. cavities); trays (i.e. physical boundary) may be stackable (i.e. the transport interface is a device forming a physical boundary configured for transferring or providing transfer of the sample tubes between the transport module and the connected device), as in claim 18.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 2020/0167591) as applied to claims 1-2, 4-16 and 18 above, and further in view of Gong et al.
Chang et al. teach test tube detection, type characterization and classification with convolutional neural networks as for claims 1-2, 4-16 and 18.
Chang et al. do not teach using YOLOV3-tiny deep learning architecture, as in claim 3.
Gong et al., however, teach object detection based on YOLOV3-tiny with a vehicle dataset. Gong et al. teach that YOLOV3-tiny is an improvement over convolutional neural networks (page 3240, col. 2).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have substituted the convolutional neural network and classification system that classifies test tubes, as taught by Chang et al., with the YOLOV3-tiny as taught by Gong et al. Gong et al. teach that YOLOV3-tiny improves upon CNN object detection (page 3241, col. 1, par. 1). One of skill in the art would have had a reasonable expectation of success at detecting the tubes of Chang et al. with the YOLOV3-tiny taught by Gong et al. because both Chang et al. and Gong et al. classify images using object detection learning methods. Substituting the CNN and classifier of Chang et al. with the YOLOV3-tiny of Gong et al. would be a simple substitution of one known equivalent element for another to obtain predictable results.
The following rejection is necessitated by Applicant’s amendments filed 1/23/2026.
Claims 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chang et al. (US 2020/0167591) as applied to claims 1-2, 4-16 and 18 above, and further in view of Tran et al. (US 2021/0356481).
Chang et al. teach test tube detection, type characterization and classification with convolutional neural networks as for claims 1-2, 4-16 and 18.
Chang et al. teach (par. 0004) storage and transport of test tubes between automated analyzers and storage locations using trays.
Chang et al. teach (par. 0004) transferring trays and tubes with a robotic arm and that a robot including an end effector may remove the sample tubes from the tray and transport them to a carrier or to an automated analyzer.
Chang et al. do not teach transporting a tube using a rotating carousel having several cavities.
Tran et al. teach (par. 0045) placing test tubes into a rack, transporting the tubes, and inserting them into a circular carousel that rotates to make the sample available. Tran et al. further teach (par. 0046) automatically transporting tubes between stations, such as analyzer systems and pooler systems, with the use of robotics.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the classification process of Chang et al., which includes imaging test tubes in trays, with the test tube storage and transport system of Tran et al. for carrying and moving test tubes in a rotatable carousel with the use of a robot. Tran et al. provide motivation by teaching that such a system allows for minimal human assistance (par. 0045). One of ordinary skill would have had a reasonable expectation of success in combining Chang et al. with Tran et al. because both teach automated test tube analyzer systems.
Response to Arguments
Applicant's arguments filed 12/31/2025 have been fully considered but they are not persuasive.
Applicants argue (Remarks, page 7, par. 4) that a particular embodiment relates to determining the state of a cavity of a rotating transport interface. Applicants further argue that, as recited in claim 1, a video stream is used for coping with the movement.
In response, the rotating carousel is recited in claims 17 and 19, which depend from newly introduced claim 16. Furthermore, claim 1 does not recite capturing a video stream, nor does it recite the rotating carousel.
Applicants argue that Chang refers to static situations and that one of skill would know that such models are developed and optimized for static photos and would not apply the model of Chang for image recognition to a video stream of a rotating laboratory interface.
In response, the claims do not recite the argued embodiments. The claims are not directed to the image capture and analysis of a live video stream of a rotating apparatus.
Suggestion for Examiner Interview
Applicant is advised to contact the Examiner at the below listed contact information in order to set up an Interview to discuss the rejections maintained and newly set forth herein. It is noted that an Interview with the Examiner serves to move prosecution forward and clarify remaining issues in the Office Action.
E-mail communication Authorization
Per updated USPTO Internet usage policies, Applicant and/or applicant’s representative is encouraged to authorize the USPTO examiner to discuss any subject matter concerning the above application via Internet e-mail communications. See MPEP 502.03. To approve such communications, Applicant must provide written authorization for e-mail communication by submitting the following statement via EFS Web (using PTO/SB/439) or Central Fax (571-273-8300):
Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.
Written authorizations submitted to the Examiner via e-mail are NOT proper. Written authorizations must be submitted via EFS-Web (using PTO/SB/439) or Central Fax (571-273-8300). A paper copy of e-mail correspondence will be placed in the patent application when appropriate. E-mails from the USPTO are for the sole use of the intended recipient, and may contain information subject to the confidentiality requirement set forth in 35 USC § 122. See also MPEP 502.03.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Anna Skibinsky, whose telephone number is (571) 272-4373. The examiner can normally be reached 12 pm - 8:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ram Shukla can be reached on (571) 272-7035. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Anna Skibinsky/
Primary Examiner, AU 1635