DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 17/042,063, filed on 25 September 2020.
Allowable Subject Matter
Claims 1-3 are currently subject to nonstatutory double patenting rejections, but are otherwise not subject to any prior art rejections under either 35 U.S.C. § 102 or 35 U.S.C. § 103. If the foregoing shortcomings of these claims were rectified by the timely filing of a terminal disclaimer, these claims would be allowable.
The following is a statement of reasons for the indication of allowable subject matter:
With regard to claims 1-3, these claims recite the same patentable features as were found allowable in parent Application No. 17/659,574, which issued as United States Patent No. 11,961,320 on 16 April 2024. The present claims are allowable for the same reasons as were provided in the parent application.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,961,320. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application, Claim 1 | U.S. Patent No. 11,961,320, Claim 1
A method for training a machine learning model to identify a subject in a group of subjects with each subject having at least one machine readable identifier providing a subject ID, said method comprising:
A method for training a machine learning model to identify a subject in a group of subjects with each subject having at least one machine readable identifier providing a subject ID, said method comprising:
- providing a computer vision system with an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
providing a computer vision system with an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
- defining said machine learning model;
defining said machine learning model in said computer vision system;
- capturing a first image using said image capturing system;
capturing a first image using said image capturing system, said first image showing a first subset of subjects of said group of subjects;
- reading said subject IDs of subjects present in said first image and linking those subject IDs with said first image, providing a first annotated image;
reading said subject ID of each subject in said first subset, and linking each subject ID of said first subset with said first image, providing a first annotated image;
- capturing at least one further image using said image capturing system, said further image showing a further subset of subjects of said group of subjects, and
capturing at least one further image using said image capturing system, said further image showing a further subset of subjects of said group of subjects;
- subjecting said first annotated image and said at least one further annotated image to said machine learning model for training said machine learning model.
reading said subject ID of each subject …, and subjecting said first annotated image and said at least one further annotated image to said machine learning model for training said machine learning model.
Claim 2 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,961,320. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application, Claim 2 | U.S. Patent No. 11,961,320, Claim 1
A method for identifying a subject in a group of subjects, comprising:
A method for training a machine learning model to identify a subject in a group of subjects…
- providing a computer vision system with an image capturing system comprising at least one image capturing device;
providing a computer vision system with an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
- providing a trained machine learning network, trained using a method comprising:
A method for training a machine learning model …, said method comprising:
- providing a computer vision system with an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
providing a computer vision system with an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
- defining said machine learning model;
defining said machine learning model in said computer vision system;
- capturing a first image using said image capturing system;
capturing a first image using said image capturing system, said first image showing a first subset of subjects of said group of subjects;
- reading said subject IDs of subjects present in said first image and linking those subject IDs with said first image, providing a first annotated image;
reading said subject ID of each subject in said first subset, and linking each subject ID of said first subset with said first image, providing a first annotated image;
- capturing at least one further image using said image capturing system, said further image showing a further subset of subjects of said group of subjects, and
capturing at least one further image using said image capturing system, said further image showing a further subset of subjects of said group of subjects;
- subjecting said first annotated image and said at least one further annotated image to said machine learning model for training said machine learning model;
reading said subject ID of each subject …, and subjecting said first annotated image and said at least one further annotated image to said machine learning model for training said machine learning model.
- capturing an image using said image capturing system; and
A method for training a machine learning model to identify a subject in a group of subjects …, said method comprising: providing a computer vision system with an image capturing system comprising at least one image capturing device…
- subjecting said captured image to said trained machine learning network for identifying said subject in said group of subjects.
A method for training a machine learning model to identify a subject in a group of subjects …, said method comprising: providing a computer vision system with an image capturing system comprising at least one image capturing device…
Claim 1 of U.S. Patent No. 11,961,320 discloses training a machine learning model to identify a subject in a group of subjects using a computer vision system with an image capturing system comprising at least one image capturing device. Implicitly, a computer vision system is trained with the intent that it be used. "[I]n considering the disclosure of a reference, it is proper to take into account not only specific teachings of the reference but also the inferences which one skilled in the art would reasonably be expected to draw therefrom." In re Preda, 401 F.2d 825, 826, 159 USPQ 342, 344 (CCPA 1968); see also In re Lamberti, 545 F.2d 747, 750, 192 USPQ 278, 280 (CCPA 1976). In the instant matter, one of ordinary skill in the art would infer that use of a computer vision system comprising at least one image capturing device would comprise capturing an image using said image capturing system. Furthermore, one of ordinary skill in the art would infer that use of the computer vision system trained "to identify a subject in a group of subjects" would comprise subjecting said captured image to said trained machine learning network for identifying said subject in said group of subjects.
Claim 3 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim of U.S. Patent No. 11,961,320. Although the claims at issue are not identical, they are not patentably distinct from each other as shown in the following table:
Present Application, Claim 3 | U.S. Patent No. 11,961,320, Claim
A system for identifying a subject in a group of subjects with each subject having at least one machine readable identifier providing a subject ID, said system comprising:
A system for identifying a subject in a group of subjects with each subject having at least one machine readable identifier providing a subject ID, said system comprising:
- a computer vision system comprising an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
a computer vision system comprising an image capturing system comprising at least one image capturing device, and a reader system comprising at least one reader for reading said at least one machine readable identifier;
- a machine learning model defined in said computer vision system;
a machine learning model defined in said computer vision system;
said computer vision system in operation:
said computer vision system in operation:
- capturing a first image using said image capturing system;
capturing a first image using said image capturing system, said first image showing a first subset of subjects of said group of subjects;
- reading said subject IDs of subjects in said first image and linking those subject IDs with said first image, providing a first annotated image;
reading said subject ID of each subject in said first subset, and linking each subject ID of said first subset with said first image, providing a first annotated image;
- capturing at least one further image using said image capturing system, said further image showing a further subset of subjects of said group of subjects, and
capturing at least one further image using said image capturing system, said further image showing a further subset of subjects of said group of subjects;
- subjecting said first annotated image and said at least one further annotated image to said machine learning model for training said machine learning model.
reading said subject ID of each subject …, and subjecting said first annotated image and said at least one further annotated image to said machine learning model for training said machine learning model.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID F DUNPHY, whose telephone number is (571) 270-1230. The examiner can normally be reached 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID F DUNPHY/Primary Examiner, Art Unit 2673