Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claims 19-20 as a whole define a program per se, and a program is not a “process, machine, manufacture, or composition of matter.” “Those four categories define the explicit scope and reach of subject matter patentable under 35 U.S.C. § 101; thus, such a signal cannot be patentable subject matter.” In re Nuijten, 84 USPQ2d 1495 (Fed. Cir. 2007). It is recommended that the claims recite a “non-transitory computer readable medium . . . .”
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-8, 10-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kortylewski (Compositional Convolutional Neural Networks: A Deep Architecture with Innate Robustness to Partial Occlusion, from the IDS filed on 8/13/24).
Regarding claim 1, Kortylewski teaches a method comprising: obtaining, by one or more computers, a first feature set that represents a first image that depicts a first type of object, wherein the first feature set preserves the spatial features of the first image (section 3.1, “spatial information from the image is preserved”);
providing, by one or more computers, the obtained first feature set as an input to a first machine learning model that has been trained to process a feature set that preserves spatial features of an image depicting an object of the same object type as the first type of object and generate output data for each class of a plurality of different classes that each correspond to a particular spatial orientation of an object of the same object type as the first type of object, where the first output data for each class represents a likelihood that an image represented by the first feature map depicts an object in a particular spatial orientation that corresponds to the class (section 3.1, the mixture of compositional models outputs data for each class of a plurality of different classes that each correspond to a particular spatial orientation. See also Figure 2, which shows the overall architecture);
providing, by one or more computers, the obtained first feature set as an input to a second machine learning model that has been trained to process a feature map corresponding to an image of any object type and generate output data that includes an occlusion likelihood, wherein the occlusion likelihood indicates a likelihood that an object depicted in an image represented by a feature set processed by the second machine learning model is at least partially occluded (section 3.1, occlusion modeling, this is a second model);
processing, by one or more computers, the obtained first feature set through the first machine learning model to generate first output data (section 3.1, mixture likelihood maps);
processing, by one or more computers, the obtained first feature set through the second machine learning model to generate second output data (section 3.1, occlusion likelihood is computed from the vMF);
determining, by one or more computers and based on the first output data and the second output data, a score that indicates a likelihood that the first image depicts an object of the first type that is at least partially occluded (section 3.1, equation 11, explains combining mixture and occlusion likelihoods); and
based on a determination that the determined score satisfies a predetermined threshold, generating, by one or more computers, third output data that includes an instruction indicating that an object of the first type that is at least partially occluded has been detected (section 3.1, class score).
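For illustration only (not part of the record), the mapped data flow of claim 1 can be summarized in a short Python sketch. The two model functions, the combination rule, and the threshold below are placeholder assumptions; Kortylewski combines mixture and occlusion likelihoods per position (section 3.1, equation 11), whereas the sketch uses a simple product for readability.

    import numpy as np

    def score_occluded_object(feature_set, orientation_model, occlusion_model,
                              threshold=0.5):
        # First model: output data for each class, where each class corresponds
        # to a particular spatial orientation of the object type.
        first_output = orientation_model(feature_set)
        # Second model: likelihood that the depicted object is partially occluded.
        second_output = occlusion_model(feature_set)
        # Combine the two outputs into a single score. The product of the best
        # per-class likelihood and the occlusion likelihood is a placeholder,
        # not Kortylewski's equation 11.
        score = float(np.max(first_output) * second_output)
        if score >= threshold:
            # Third output data: an instruction indicating that a partially
            # occluded object of the first type has been detected.
            return score, "occluded object of the first type detected"
        return score, None

    # Toy usage with dummy stand-ins for the two trained models.
    feature_set = np.random.rand(7, 7, 64)                    # spatial feature map
    orientation_model = lambda f: np.array([0.1, 0.7, 0.2])   # three orientation classes
    occlusion_model = lambda f: 0.9                           # occlusion likelihood
    print(score_occluded_object(feature_set, orientation_model, occlusion_model))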
Regarding claim 2, see section 3.1: a fully generative compositional model produces a second feature set, the mixture likelihood maps.
Regarding claim 3, see section 3.1, the feed-forward structure with a class score. Section 4 describes training.
Regarding claim 4, see section 3.1, which teaches inference as a feed-forward neural network and computing the likelihood tensor from the feature vectors. Section 3.1 also has an occlusion model with object class scores.
Regarding claim 5, see Figure 2, final class score.
Regarding claim 6, see section 3.1 and Figure 2: Z(m,p) encodes the position.
Regarding claim 7, see the class score in section 3.1 and Figure 2.
Regarding claim 8, see Figure 2 and section 3.1, the binary occlusion maps (Z(m,y)), which are based on the score.
Regarding claims 10-17, see the rejections of claims 1-8.
Regarding claims 19-20, see the rejection of claims 1-3.
Allowable Subject Matter
Claims 9 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The specific steps of “converting the object occlusion map into a binary occlusion map further comprises: comparing a positional occlusion likelihood score, representing the likelihood that a particular position is occluded, against a predetermined threshold; upon determining that a positional occlusion likelihood score exceeds the predetermined threshold, marking the position in the binary occlusion map as a value that represents the position is occluded; and upon determining that a positional occlusion likelihood score does not exceed the predetermined threshold, marking the position in the binary occlusion map as a value that represents the position is not occluded” are not found in the prior art.
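For illustration only, the indicated-allowable conversion reads on a simple per-position thresholding. The function name, array layout, and example threshold below are assumptions for the sketch, not language from the claims or the prior art.

    import numpy as np

    def to_binary_occlusion_map(occlusion_map, threshold):
        # occlusion_map holds a positional occlusion likelihood score for each
        # position; threshold is the predetermined threshold.
        binary_map = np.zeros_like(occlusion_map, dtype=np.uint8)
        # Positions whose likelihood exceeds the threshold are marked with a
        # value (1) representing "occluded"; all other positions keep the
        # value (0) representing "not occluded".
        binary_map[occlusion_map > threshold] = 1
        return binary_map

    # Example: two of four positions exceed the 0.5 threshold.
    print(to_binary_occlusion_map(np.array([[0.2, 0.9], [0.6, 0.1]]), 0.5))
    # [[0 1]
    #  [1 0]]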
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HADI AKHAVANNIK whose telephone number is (571)272-8622. The examiner can normally be reached 9 AM - 5 PM Monday to Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HADI AKHAVANNIK/Primary Examiner, Art Unit 2676