Prosecution Insights
Last updated: April 19, 2026
Application No. 18/314,838

INFORMATION PROCESSING APPARATUS AND METHOD

Non-Final OA: §101, §103
Filed: May 10, 2023
Examiner: AZIMA, SHAGHAYEGH
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Toshiba TEC Kabushiki Kaisha
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (286 granted / 350 resolved; +19.7% vs TC avg) — above average
Interview Lift: +11.4% (moderate) on resolved cases with an interview, versus cases without
Avg Prosecution: 2y 7m typical timeline; 36 applications currently pending
Total Applications: 386 across all art units (career history)
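The headline figures above follow directly from the raw counts. A minimal sketch of that arithmetic (it assumes the reported +11.4% interview lift is an additive percentage-point bump on the career allow rate, which is how the 93% with-interview figure is presented):

```python
granted, resolved = 286, 350

# Career allow rate: share of this examiner's resolved cases that granted
allow_rate = granted / resolved              # 0.8171... -> displayed as 82%

# Assumption: the dashboard's "+11.4% interview lift" adds percentage points
interview_lift = 0.114
with_interview = allow_rate + interview_lift  # 0.9311... -> displayed as 93%

print(f"{allow_rate:.0%} career, {with_interview:.0%} with interview")
```

Run as-is, this reproduces the 82% and 93% shown in the cards above.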

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 350 resolved cases
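A quick consistency check on those figures (an observation from the numbers above, not a claim from the source): every statute's rate and delta pair back to the same implied Tech Center baseline.

```python
# (examiner rate %, delta vs TC average in points), as reported above
stats = {"§101": (15.8, -24.2), "§103": (42.5, +2.5),
         "§102": (13.9, -26.1), "§112": (14.5, -25.5)}

# Implied TC baseline per statute: reported rate minus reported delta
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% TC average
```

That all four deltas resolve to one 40.0% baseline suggests the chart used a single Tech Center average estimate across statutes.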

Office Action

Grounds of rejection: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the applicant's communication filed on 05/10/2023. By virtue of this communication, claims 1-18 filed on 05/10/2023 are currently pending in the instant application.

Information Disclosure Statement

The Information Disclosure Statement (IDS) form PTO-1449, filed on 05/10/2023, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosed therein was considered by the examiner.

Drawings

The drawings received on 05/10/2023 have been reviewed by the Examiner and are acceptable.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: person detector, distance calculation component, lost property determination component, notification component, image comparison component, abnormality detector, registration component, settlement component… in claims 1-6 and 13-18. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1, 7, and 13 recite: acquiring an image captured by an imaging device; detecting a person from the acquired image; detecting an object separated from the detected person; calculating a distance between the detected person and the detected object; and determining that, when the calculated distance is equal to or greater than a threshold over a predetermined time or longer, the object is lost property.

Step 1: The instant claims are directed to an apparatus, a method, and a point of sale terminal, all among the statutory categories of invention.

Step 2A — Prong 1: For example, in method claim 7, the limitations “acquiring an image captured by an imaging device; detecting a person from the acquired image;” “detecting an object separated from the detected person;” “calculating a distance between the detected person and the detected object;” and “determining that, when the calculated distance is equal to or greater than a threshold over a predetermined time or longer, the object is lost property,” as recited, describe a method that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or through observation: a person inspecting an image of an environment that includes a person and an object could, based on observation of their distance from each other, render an opinion as to whether the object has been lost or dropped. That is, other than reciting “by a computer,” nothing in the claim steps precludes the limitations from practically being performed in the mind or through observation by a person inspecting an image of an environment. The recited computer is simply a generic device. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic components, then it falls within the “mental processes” grouping of abstract ideas, which includes concepts performed in the human mind, such as an observation, evaluation, judgment, or opinion. Accordingly, the claim recites an abstract idea. In addition, the additional components recited in independent claims 1 and 13, i.e., a POS terminal or an apparatus, are simply generic computing components; accordingly, these independent claims include the above-described abstract idea.

Step 2A — Prong 2: The 2019 PEG defines the phrase “integration into a practical application” to require an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception. In the instant case, the additional elements in the claims do not apply, rely on, or use the judicial exception. The judicial exception is not integrated into a practical application because the claims recite only additional elements, such as a computer, a processor, or a terminal, used to perform the recited functions/steps. These computing components are recited at a high level of generality, and the claims recite no other additional limitations. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they amount to a field-of-use limitation that does not impose any meaningful limits on practicing the abstract idea. Therefore, independent claims 1, 7, and 13 are directed to an abstract idea.

Step 2B: Because the claims fail under Step 2A, they are further evaluated under Step 2B. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration into a practical application, the additional element of using a computer, a processor, or a POS terminal to execute programming instructions to perform the steps amounts to no more than mere instructions to apply the exception using a generic apparatus component. Mere instructions to apply an exception using a generic apparatus component cannot provide an inventive concept. The claims are not patent eligible. Further, with regard to dependent claims 2-6, 8-12, and 14-18, viewed individually, their additional elements, under their broadest reasonable interpretation, cover performance of the limitations in the mind and do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. Accordingly, claims 1-18 are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 6-9, 12-15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuki (JP 2021111033 A) in view of Nakamura (US 2020/0005044).

As per claim 1: An information processing apparatus, comprising:

“an image acquisition component configured to acquire an image captured by an imaging device;” (Kazuki, ¶[0006] discloses an object monitoring device that includes an image acquisition unit that acquires images captured by an imaging device, and an object detection unit that detects the target object and the holder of the object from the images acquired by the image acquisition unit. Figure 1 and ¶[0013] disclose that imaging device 10 includes imaging unit 11.)

“a person detector configured to detect a person from the image acquired by the image acquisition component;” (Kazuki, ¶[0006], as above; ¶[0013] discloses that the imaging device 10 is a device for imaging a person at the installation location and includes an imaging unit 11 for capturing the image. A person and an object possessed by the person are detected, and the state of possession of the object, that is, whether the object is held by the person or separated from the person, is detected.)

“an abnormality detector configured to detect an object separated from the person detected by the person detector; and determine the object is lost property” (Kazuki, ¶[0006] discloses detecting a change in the state of possession of the object from the multiple images acquired by the image acquisition unit, and an information storage unit that acts when the object detection unit detects a change in the state of possession from a state in which the object was held by the holder to a state in which the object has been separated from the holder. ¶[0009] discloses that it is possible to monitor items carried by a person to prevent them from being lost. ¶[0013] further discloses that a person and an item carried by that person are detected, and the state of the item, that is, whether it is being carried by the person or is separated from the person, is detected.)

“a distance calculation component configured to calculate a distance to the object” (Kazuki, ¶[0013] discloses that the imaging unit 11 is capable of measuring the distance to an object located at the center of the imaging range. ¶[0036] discloses that the image processing unit 51 outputs the distance measurement result transmitted from the imaging device 10 to the position coordinate calculation unit 52, and the position coordinate calculation unit 52 uses the distance measurement results, the latest rotation angle stored in the rotation angle memory unit 54, and the detection results from the object detection unit 56 to generate information about the position where the object 80 moved away from the holder 70 and stopped. ¶[0039-0040], ¶[0070], ¶[0073-0074], and ¶[0077] disclose determining when a predetermined period of time has passed since the object separated from the holder's position.)

However, Kazuki is silent on the following, which would have been obvious in view of Nakamura, from a similar field of endeavor: “a distance calculation component configured to calculate a distance between the person detected by the person detector and the object detected by the abnormality detector; and a lost property determination component configured to determine that, when the distance calculated by the distance calculation component is equal to or greater than a threshold over a predetermined time or longer.” (Nakamura, ¶[0066] discloses that the left object detecting unit 1012 can detect that an object has been left behind when passengers and luggage associated with each other have been, for example, separated from each other for a predetermined amount of time, or when the distance between passengers disembarking and the luggage associated with them exceeds a predetermined distance.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Nakamura's left-object-detection technique with Kazuki's technique to provide the known and expected uses and benefits of Nakamura's technique over Kazuki's object-monitoring technique. The proposed combination would have constituted a mere arrangement of old elements, each performing its known function, and would have yielded no more than one would expect from such an arrangement. Therefore, it would have been obvious to a person of ordinary skill in the art to incorporate Nakamura into Kazuki in order to accurately detect lost objects and notify users. (Refer to Nakamura, paragraph [0004].)

Claims 7 and 13 have been analyzed and are rejected for the reasons indicated for claim 1 above.

As per claim 2: the information processing apparatus according to claim 1. Kazuki as modified by Nakamura further discloses “a storage controller configured to associate an image indicating the person detected by the person detector, an image indicating the object that is separated from the person, and a position of the object with one another, and to store the associated information in a storage device.” (Kazuki, ¶[0006] discloses an information storage unit that, when the object detection unit detects a change in the state of possession from a state in which the object was held by the holder to a state in which the object has been separated from the holder, stores location information indicating the position of the object after it has been separated from the holder.)

Claims 8 and 14 have been analyzed and are rejected for the reasons indicated for claim 2 above.

As per claim 3: the information processing apparatus according to claim 1. Kazuki as modified by Nakamura further discloses “a notification component configured to perform a notification under a condition that the lost property determination component determines that there is lost property.” (Kazuki, ¶[0071] discloses that the alarm control unit 59 executes control to notify the holder 70 that the object 80 is not in his/her possession; for this control, the operation terminal 60 is provided with an alarm device 63 for notifying the owner 70. See also ¶[0086].)

Claims 9 and 15 have been analyzed and are rejected for the reasons indicated for claim 3 above.

As per claim 6: the information processing apparatus according to claim 1. Kazuki as modified by Nakamura further discloses “wherein the abnormality detector is further configured to record a date and time of detection of the object separated from the person detected by the person detector.” (Nakamura, ¶[0021] discloses that the video footage information gathering unit 1011 can gather the video footage captured by cameras installed in a plurality of locations; it can, for example, receive video footage captured at a transportation facility and store the footage in association with information on the location and time of capture and the like in the database DB1. The video footage information gathering unit 1011 may manage the footage such that at least footage reaching back from the current time to a predetermined time in the past is stored in the database DB1, and footage prior to the predetermined time may be deleted from the database DB1. See also ¶[0053].)

Claims 12 and 18 have been analyzed and are rejected for the reasons indicated for claim 6 above.

Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuki (JP 2021111033 A) in view of Nakamura (US 2020/0005044), further in view of Hill (CN 102685256 B).

As per claim 4: the information processing apparatus according to claim 2. Kazuki as modified by Nakamura does not explicitly disclose the following, which would have been obvious in view of Hill, from a similar field of endeavor: “wherein the storage controller deletes information related to the lost property from the storage device under a condition that information indicating that the lost property is returned to an owner is received.” (Hill, ¶[0067] discloses that after the device is found, the user's report of the device's loss can be removed from the server, the record is removed from the blacklist, and the search for the device is no longer executed.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Hill's technique of locating lost objects with the technique of Kazuki as modified by Nakamura to provide the known and expected uses and benefits of Hill's technique over the object-monitoring technique of Kazuki as modified by Nakamura. The proposed combination would have constituted a mere arrangement of old elements, each performing its known function, and would have yielded no more than one would expect from such an arrangement. Therefore, it would have been obvious to incorporate Hill into Kazuki as modified by Nakamura in order to accurately locate the position of the lost item. (Refer to Hill, paragraph [0005].)

Claims 10 and 16 have been analyzed and are rejected for the reasons indicated for claim 4 above.

Claims 5, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kazuki (JP 2021111033 A) in view of Nakamura (US 2020/0005044), further in view of Inoue (CN 100411411 C).

As per claim 5: the information processing apparatus according to claim 1, further comprising. Kazuki as modified by Nakamura does not explicitly disclose the following, which would have been obvious in view of Inoue, from a similar field of endeavor: “an image comparison component configured to compare an image indicating the person detected by the person detector with an image of a declarer who makes a declaration that the declarer is an owner of the lost property, wherein the lost property determination component determines that the declarer is the owner under a condition that the images of the persons compared by the image comparison component match.” (Inoue, page 6, lines 8-16, discloses that camera 7 is arranged to collect the face image of the user during operation, and the collected image data is compared with the recorded user data. Based on the comparison of the facial images, it is approved or verified whether the current user is the owner of the mobile phone (step S55). If the current user is verified not to be the legal user (owner) of mobile telephone 1, i.e., the face image of the user does not match the recorded face image, then use of the functions of mobile phone 1 is limited (step S56). If the current user is verified to be the legal user (owner) of the mobile phone, user verification is finished.)

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Inoue's technique of verifying the owner of a lost item with the technique of Kazuki as modified by Nakamura to provide the known and expected uses and benefits of Inoue's technique over the object-monitoring technique of Kazuki as modified by Nakamura. The proposed combination would have constituted a mere arrangement of old elements, each performing its known function, and would have yielded no more than one would expect from such an arrangement. Therefore, it would have been obvious to incorporate Inoue into Kazuki as modified by Nakamura in order to accurately detect facial features for verifying the lawful owner of a lost item. (Refer to Inoue, page 1.)

Claims 11 and 17 have been analyzed and are rejected for the reasons indicated for claim 5 above.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAGHAYEGH AZIMA, whose telephone number is (571) 272-1459. The examiner can normally be reached Monday-Friday, 9:30-6:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAGHAYEGH AZIMA/
Examiner, Art Unit 2671
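The lost-property determination at the heart of claims 1, 7, and 13 reduces to a sustained-distance test: flag the object once its distance from the detected person stays at or above a threshold for at least a predetermined time. A minimal sketch of that logic (the function name, the sample format, and the threshold/duration values are illustrative assumptions; the claims recite no specific values):

```python
def is_lost_property(samples, dist_threshold, min_duration):
    """samples: list of (timestamp_seconds, person_to_object_distance).

    Returns True once the distance has stayed at or above dist_threshold
    continuously for at least min_duration seconds, per the claimed test.
    """
    run_start = None
    for t, d in samples:
        if d >= dist_threshold:
            if run_start is None:
                run_start = t              # separation run begins here
            if t - run_start >= min_duration:
                return True                # separated long enough: lost property
        else:
            run_start = None               # person returned; reset the run
    return False

# Object drifts beyond 2 m and stays there for more than 10 s
track = [(0, 0.5), (5, 2.5), (10, 3.0), (16, 3.1)]
print(is_lost_property(track, dist_threshold=2.0, min_duration=10))  # True
```

Note the reset branch: a momentary separation that ends before `min_duration` elapses never triggers the determination, which mirrors the claim's "over a predetermined time or longer" condition.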

Prosecution Timeline

May 10, 2023: Application Filed
Oct 16, 2025: Non-Final Rejection, §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586350: DETERMINING AUDIO AND VIDEO REPRESENTATIONS USING SELF-SUPERVISED LEARNING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573209: ROBUST INTERSECTION RIGHT-OF-WAY DETECTION USING ADDITIONAL FRAMES OF REFERENCE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561989: VEHICLE LOCALIZATION BASED ON LANE TEMPLATES (granted Feb 24, 2026; 2y 5m to grant)
Patent 12530867: Action Recognition System (granted Jan 20, 2026; 2y 5m to grant)
Patent 12525049: PERSON RE-IDENTIFICATION METHOD, COMPUTER-READABLE STORAGE MEDIUM, AND TERMINAL DEVICE (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 350 resolved cases by this examiner. Grant probability derived from the career allow rate.
