DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Summary
This Final Office Action is in response to the communication received on January 22, 2026.
Claims 1, 2, 6, and 9-10 have been amended.
Claim 7 has been cancelled.
Claims 1-6 and 8-10 are pending.
The effective filing date of the claimed invention is June 18, 2020.
Response to Amendment
Amendments to Claims 1, 2, 6, and 9-10 are acknowledged.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 and 8-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step 1 – Statutory Categories
As indicated in the preambles of the claims, the examiner finds each claim is directed to a process, machine, manufacture, or composition of matter (Claim 10 is a process, and Claims 1-6 and 8-9 are machines). Accordingly, Step 1 is satisfied.
Step 2A – Prong 1: Is a Judicial Exception Recited?
Claim 1 (and similarly Claims 9 and 10) recites the following limitations, which are found to recite an abstract idea. Any additional elements will be analyzed under Step 2A, Prong 2 and Step 2B:
an imager configured to generate an image;
a controller configured to estimate information indicating an object contained in the image, a category and an orientation of the object based on the image (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential)), and
a display device configured to display the image and the information indicating the object (See MPEP 2106.04(a)(2)(II), certain methods of organizing human activity. The sub-grouping “managing personal behavior or relationships or interactions between people” includes social activities, teaching, and following rules or instructions. Another example of a claim reciting social activities is Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 127 USPQ2d 1553 (Fed. Cir. 2018). The social activity at issue was the social activity of “’providing information to a person without interfering with the person’s primary activity.’” 896 F.3d at 1344, 127 USPQ2d 1553 (citing Interval Licensing LLC v. AOL, Inc., 193 F. Supp.3d 1184, 1188 (W.D. Wash. 2014)). The patentee claimed an attention manager for acquiring content from an information source, controlling the timing of the display of acquired content, displaying the content, and acquiring an updated version of the previously-acquired content when the information source updates its content. 896 F.3d at 1339-40, 127 USPQ2d at 1555. The Federal Circuit concluded that “[s]tanding alone, the act of providing someone an additional set of information without disrupting the ongoing provision of an initial set of information is an abstract idea,” observing that the district court “pointed to the nontechnical human activity of passing a note to a person who is in the middle of a meeting or conversation as further illustrating the basic, longstanding practice that is the focus of the [patent ineligible] claimed invention.” 896 F.3d at 1344-45, 127 USPQ2d at 1559.),
wherein the controller includes a multilayer-structure neural network that is configured to function as:
a feature point estimator configured to estimate a feature point of an image generated by the imager based on the image (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential));
a boundary estimator configured to estimate a bounding frame of an object contained in the image based on a feature point estimated by the feature point estimator (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential));
a category estimator configured to estimate a category of an object inside the bounding frame based on a feature point estimated by the feature point estimator (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential));
a state estimator configured to estimate a state of an object inside the bounding frame based on a feature point estimated by the feature point estimator (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential)); and
an object estimator configured to estimate an object inside the bounding frame based on a feature point estimated by the feature point estimator (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential)),
the feature point estimator, the boundary estimator, the category estimator, the state estimator, and the object estimator being built using supervised learning, and the feature point estimator being built by training using images labeled with bounding frames, categories, states, and object names for individual objects (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential), and MPEP 2106.04(a)(2)(I), mathematical concepts; PEG Example 47, Claim 2, requires specific mathematical calculations (a backpropagation algorithm and a gradient descent algorithm) to perform the training of the ANN and therefore encompasses mathematical concepts), and
the controller is configured to
determine whether the controller fails to estimate the information indicating the object for any of the objects within their bounding frame (See MPEP 2106.04(a)(2)(III), mental processes, a claim to identifying head shape and applying hair designs, which is a process that can be practically performed in the human mind, In re Brown, 645 Fed. App'x 1014, 1016-17 (Fed. Cir. 2016) (non-precedential)), and
when the controller fails to estimate the information for at least one object and succeeds in estimating the category and the orientation of the at least one object, the controller controls the display device to display an instruction to move the at least one object so that the best surface to use to estimate the information indicating the at least one object based on the category of the at least one object is captured with reference to the orientation of the at least one object (See MPEP 2106.04(a)(2)(II), certain methods of organizing human activity. The sub-grouping “managing personal behavior or relationships or interactions between people” includes social activities, teaching, and following rules or instructions.).
Claim 1 (and similarly Claims 9 and 10) is directed to a series of steps for estimating an object in an image, which is a mental process, and to displaying an image and generating instructions regarding the object, which is managing personal behavior or relationships or interactions between people and is thus grouped as a certain method of organizing human activity. The mere nominal recitation of an imager to generate an image, a controller including a multilayer-structure neural network, a display device, and an object does not take the claim out of the methods of organizing human activity or mental processes groupings. Thus, Claim 1 (and similarly Claims 9 and 10) recites an abstract idea.
Step 2A – Prong 2: Is the Judicial Exception Integrated into a Practical Application?
Limitations that are indicative of integration into a practical application:
Improvements to the functioning of a computer, or to any other technology or technical field - see MPEP 2106.05(a)
Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition – see Vanda Memo
Applying the judicial exception with, or by use of, a particular machine - see MPEP 2106.05(b)
Effecting a transformation or reduction of a particular article to a different state or thing - see MPEP 2106.05(c)
Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception - see MPEP 2106.05(e) and Vanda Memo
Limitations that are not indicative of integration into a practical application:
Adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)
Adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)
Generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h)
The identified abstract idea of exemplary Claim 1 (and similarly Claims 9 and 10) is not integrated into a practical application. The additional elements are an imager configured to generate an image, a controller including a multilayer-structure neural network, a display device, and an object, which implement the underlying abstract idea. These additional elements are broadly recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application. Claim 1 (and similarly Claims 9 and 10) is directed to an abstract idea.
Step 2B – Significantly More Analysis
Claim 1 (and similarly Claims 9 and 10) does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and in combination, steps a) generate an image, b) estimate an object contained in the image and a category of the object, c) display the image and the information indicating the object, and d) display an instruction to move the object based on the category when the controller fails to estimate the information indicating the object in recognition processing and succeeds in estimating the category and the orientation of the object, do not add significantly more to the exception because they amount to merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Claim 1 (and similarly Claims 9 and 10) is ineligible.
Claim 2 recites the abstract ideas of organizing human activity and mental processes. See MPEP 2106.04(a)(2)(II) and MPEP 2106.04(a)(2)(III).
Claim 3 recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III).
Claim 4 recites the abstract idea of organizing human activity. See MPEP 2106.04(a)(2)(II).
Claim 5 recites the abstract idea of organizing human activity. See MPEP 2106.04(a)(2)(II).
Claim 6 recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III).
Claim 8 recites the abstract idea of mental processes. See MPEP 2106.04(a)(2)(III). For the additional limitation of a feature point estimator, the examiner refers to the "apply it" rationale of MPEP 2106.05(f).
Prior Art
The prior art of record fails to teach the overall combination as claimed in Claims 1-6 and 8-10. Therefore, it would not have been obvious to one of ordinary skill in the art to modify the prior art to arrive at the claimed combination without impermissible hindsight, and one of ordinary skill would have had no reason to do so. Exemplary Claim 1 recites the following:
An information processing system comprising:
an imager configured to generate an image;
a controller configured to estimate information indicating an object contained in the image, a category and an orientation of the object based on the image; and
a display device configured to display the image and the information indicating the object,
wherein the controller includes a multilayer-structure neural network that is configured to function as:
a feature point estimator configured to estimate a feature point of an image generated by the imager based on the image;
a boundary estimator configured to estimate a bounding frame of an object contained in the image based on a feature point estimated by the feature point estimator;
a category estimator configured to estimate a category of an object inside the bounding frame based on a feature point estimated by the feature point estimator;
a state estimator configured to estimate a state of an object inside the bounding frame based on a feature point estimated by the feature point estimator; and
an object estimator configured to estimate an object inside the bounding frame based on a feature point estimated by the feature point estimator, the feature point estimator, the boundary estimator, the category estimator, the state estimator, and the object estimator being built using supervised learning, and the feature point estimator being built by training using images labeled with bounding frames, categories, states, and object names for individual objects, and
the controller is configured to
determine whether the controller fails to estimate the information indicating the object for any of the objects within their bounding frame, and
when the controller fails to estimate the information for at least one object and succeeds in estimating the category and the orientation of the at least one object, the controller controls the display device to display an instruction to move the at least one object so that the best surface to use to estimate the information indicating the at least one object based on the category of the at least one object is captured with reference to the orientation of the at least one object. (Emphasis added to highlight features that distinguish over the prior art).
As further explained below, the prior art of record, alone or in combination, neither anticipates, reasonably teaches, nor renders obvious the Applicant’s claimed invention.
US Pat Pub 2023/0143661 “Konemura” discloses a learned neural network trained to output, when an object image is input, a geometric transformation parameter relevant to the object image. The object image is an image of an object identified based on object information of the first teaching data including an image and the object information including a category, a position, and a size of an object included in the image. The calculation unit calculates an orientation of the object based on the geometric transformation parameter output from the first neural network. The generation unit generates, by adding the orientation of the object calculated by the calculation unit to the first teaching data, second teaching data including an image and object information including a category, a position, a size, and an orientation of an object included in the image. Konemura fails to disclose determining whether a controller fails to estimate information for at least one object and estimating the category and the orientation of the object when the controller uses a feature point estimator to estimate a feature point, a boundary estimator to estimate a bounding frame, a category estimator to estimate a category of an object inside the bounding frame, a state estimator to estimate a state of an object, and an object estimator to estimate an object inside the bounding frame.
US Pat Pub 2017/0163882 “Piramuthu” teaches techniques for facilitating automatic-guided image capturing and presentation. In some embodiments, the method includes capturing an image of an item, automatically removing a background of the image frame, performing manual mask editing, generating an item listing, inferring item information from the image frame and automatically applying the inferred item information to an item listing form, and presenting an item listing in an augmented reality environment. Piramuthu fails to teach determining whether a controller fails to estimate information for at least one object and estimating the category and the orientation of the object when the controller uses a feature point estimator to estimate a feature point, a boundary estimator to estimate a bounding frame, a category estimator to estimate a category of an object inside the bounding frame, a state estimator to estimate a state of an object, and an object estimator to estimate an object inside the bounding frame.
US Pat Pub 2019/0370593 “Nakao” teaches that, when image recognition performed by an object recognition function and a first category recognition function on a captured image acquired from an image capture display device fails and image recognition performed by a second category recognition function succeeds, a user is informed of a method for capturing an image that enables object recognition, and the object recognition function performs image recognition on another captured image that is captured. Nakao fails to teach determining whether a controller fails to estimate information for at least one object and estimating the category and the orientation of the object when the controller uses a feature point estimator to estimate a feature point, a boundary estimator to estimate a bounding frame, a category estimator to estimate a category of an object inside the bounding frame, a state estimator to estimate a state of an object, and an object estimator to estimate an object inside the bounding frame.
Response to Arguments
35 USC 101
Applicant's arguments filed January 22, 2026 have been fully considered but they are not persuasive. Applicant argues that amended Claims 1, 9, and 10, which recite a controller that functions as various estimators to estimate an object within a bounding frame based on a feature point, are significantly more than an abstract idea, and that these features are integrated into a practical application. However, it is found that the use of various estimators to estimate an object within a bounding frame based on a feature point amounts to broadly recited computer elements that do not add a meaningful limitation to the abstract idea because they amount to merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). An applicable example would be requiring the use of software to tailor information and provide it to the user on a generic computer, Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1370-71, 115 USPQ2d 1636, 1642 (Fed. Cir. 2015). The newly added features of amended independent Claims 1, 9, and 10 include a multilayer-structure neural network and building estimators using supervised learning. These elements are found to be similar to the use and training of an artificial neural network in PEG Example 47, Claim 2. The eligibility analysis in that example explains that the claimed use of the ANN “encompasses performing evaluation, judgment, and opinion to make a determination about detected anomalies. Under its broadest reasonable interpretation when read in light of the specification, the ‘analyzing’ encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. As discussed above, the broadest reasonable interpretation of discretizing in step (b) also encompasses mathematical concepts (e.g., rounding data values) that can be performed mentally. Step (c) requires specific mathematical calculations (a backpropagation algorithm and a gradient descent algorithm) to perform the training of the ANN and therefore encompasses mathematical concepts.” As such, the newly amended claim features of a multilayer-structure neural network and building estimators using supervised learning are found to be directed to the abstract ideas of mental processes accomplished using mathematical concepts.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REVA R MOORE whose telephone number is (571)270-7942. The examiner can normally be reached M-Th: 9:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fahd Obeid can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REVA R MOORE/Examiner, Art Unit 3627
/FAHD A OBEID/Supervisory Patent Examiner, Art Unit 3627