Prosecution Insights
Last updated: April 19, 2026
Application No. 18/242,534

APPARATUS, APPARATUS CONTROL METHOD, AND RECORDING MEDIUM

Non-Final OA §102
Filed: Sep 06, 2023
Examiner: MANCHO, RONNIE M
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Casio Computer Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 76%, above average (729 granted / 963 resolved; +23.7% vs TC avg)
Interview Lift: +3.0%, minimal (resolved cases with interview)
Avg Prosecution: 3y 3m (42 currently pending)
Total Applications: 1005 (across all art units)
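The headline figures above follow directly from the career counts. A quick check (variable names are ours, not from the dashboard): the grant probability is simply the career allow rate, and the interview figure adds the stated lift.

```python
# Back out the dashboard figures from the raw counts shown above.
granted = 729
resolved = 963

allow_rate = granted / resolved        # ~0.757, displayed rounded to 76%
tc_average = allow_rate - 0.237        # implied Tech Center average, ~52%
with_interview = allow_rate + 0.030    # ~79% expected with an interview

print(f"allow rate: {allow_rate:.1%}, implied TC avg: {tc_average:.1%}, "
      f"with interview: {with_interview:.1%}")
```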

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 26.3% (-13.7% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§112: 32.1% (-7.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 963 resolved cases
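Each per-statute delta is stated relative to the Tech Center average, so the average itself can be backed out as rate minus delta. A quick check (dict name and layout are ours; values transcribed from the panel):

```python
# statute: (examiner rate %, delta vs TC avg %), as displayed above
statute_stats = {
    "101": (4.7, -35.3),
    "103": (26.3, -13.7),
    "102": (31.1, -8.9),
    "112": (32.1, -7.9),
}

# Implied Tech Center average per statute: rate - delta.
implied_tc_avg = {s: round(rate - delta, 1)
                  for s, (rate, delta) in statute_stats.items()}
# Here every statute happens to back out the same estimate, 40.0,
# consistent with a single "black line" average.
```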

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hasegawa (US 2022/0297307).
Regarding claim 1, Hasegawa discloses a robot that imitates a living thing (robot grows with time, expresses emotions, movement, etc.; abstract; sec 0007, 0009, 0039), the robot, comprising: a memory (sec 0005, 0052, 0053); a controller (110; sec 0050-0060) configured to execute a processing: setting, in the memory in response to detection of an external stimulus (sec 0050-0060) acting on the apparatus, an emotion parameter expressing a pseudo-emotion (sec 0066-0072, 0077, 0090, 0017-0122, 0111, 0151, 0169); measuring an elapsed time since a most recent detection of the external stimulus acting on the robot (sec 0090, 0017-0122, 0111, 0151, 0169); and changing the set emotion parameter, set in the memory in response to a determination that the measured elapsed time exceeds a predetermined amount of time (emotion parameter is changed when the emotion reaches a certain time; sec 0072-0077, 0109; in addition, when the external stimulus is not detected in a predetermined period, the predetermined period of time is the time the processing exits the loop at step S512 (Yes) or S513 (Yes); see Fig. 20, steps S510 to S513; sec 0077, 0180-0190, 0219, 0220).

Applicant is directed to the prior art as shown below:

[0090] Although not illustrated, the storage unit 120 also stores four character correction values (happy correction value, active correction value, shy correction value, and wanted correction value) which are increased or decreased in a character correction value adjustment process to be described later. Although the respective character values (character value (happy), character value (active), character value (shy), and character value (wanted)) are fixed in the case where the simulated growth of the robot 200 is completed, the character correction value is data (character correction data) for correcting the character according to how the user interacts with the robot 200 even after the completion of the growth.
As will be described later, the character correction value is set on the emotion map 300 according to a condition (second condition based on the external stimulus data) based on any location of an area where the emotion data 121 has been present for the longest time. Note that the second condition based on the external stimulus data is not limited to the above condition, but any condition can be set as long as being a condition for correcting the character after the size of the emotion map 300 is fixed (for example, condition related to the occurrence frequency of the simulated emotion of the robot 200 represented by the emotion data 121). [0111] On the other hand, if there is no external stimulus in step S102 (step S102; No), the processing unit 110 determines whether to perform a spontaneous movement such as a breathing movement (step S107). Although a method of determining whether to perform the spontaneous movement is arbitrary, it is assumed that the determination in step S107 is “Yes” every first reference time (for example, 5 seconds) in the embodiment. [0112] If the spontaneous movement is performed (step S107; Yes), the processing unit 110 proceeds to step S106 and executes the movement selection process with a “lapse of the first reference time” as the movement trigger, and then, proceeds to step S108. Regarding claim 2, Hasegawa discloses the robot according to claim 1, wherein the processes comprise changing the emotion parameter so that the apparatus appears to be in a calm state (sec 0097, 0100, 0103, 0106) when the pseudo-emotion is close to the predetermined state and, also, the external stimulus is not detected in the predetermined period (sec 0072-0077, 0097-0109, 0108-0190). 
Regarding claim 3, Hasegawa discloses the robot according to claim 2, wherein the processes comprise changing the emotion parameter in a step-wise manner so that the apparatus appears to be in the calm state due to the pseudo-emotion becoming a neutral emotion when the pseudo-emotion is close to the predetermined state and, also, the external stimulus is not detected in the predetermined period (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 4, Hasegawa discloses the robot according to claim 2, wherein the processes comprise changing a quantity of the emotion parameter that is changed based on a period in which the external stimulus is not detected (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 5, Hasegawa discloses the robot according to claim 1, wherein the controller changes a quantity of the emotion parameter that is changed based on a pseudo personality of the robot (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 6, Hasegawa discloses the robot according to claim 1, further comprising: a battery (sec 0179), wherein the processes comprise changing an amount of change of the emotion parameter in accordance with an amount of charge of the battery (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 7, Hasegawa discloses the robot according to claim 1, further comprising: a battery (sec 0179), wherein the processes comprise changing an amount of change of the emotion parameter based on whether charging of the battery is being performed (sec 0072-0077, 0097-0109, 0108-0190).
Regarding claim 8, Hasegawa discloses a method executed by a controller of a robot that imitates a living thing (robot grows with time, expresses emotions, movement, etc.; abstract; sec 0007, 0009, 0039), the robot including the controller and a memory (sec 0005, 0052, 0053), and the method comprising: setting in the memory in response to detection of an external stimulus acting on the robot, an emotion parameter expressing a pseudo-emotion of the robot (sec 0072-0077, 0097-0109, 0108-0190); measuring an elapsed time since a most recent detection of the external stimulus acting on the robot (sec 0090, 0017-0122, 0111, 0151, 0169); determining whether the measured elapsed time exceeds a predetermined amount of time (emotion parameter is changed when the emotion reaches a certain time; sec 0072-0077, 0109; in addition, when the external stimulus is not detected in a predetermined period, the predetermined period of time is the time the processing exits the loop at step S512 (Yes) or S513 (Yes); see Fig. 20, steps S510 to S513; sec 0077, 0180-0190, 0219, 0220); and controlling the emotion parameter set in the memory to change in response to a determination in the determining that the measured elapsed time exceeds the predetermined amount of time (sec 0090, 0017-0122, 0111, 0151, 0169).

Applicant is directed to the prior art as shown below:

[0090] Although not illustrated, the storage unit 120 also stores four character correction values (happy correction value, active correction value, shy correction value, and wanted correction value) which are increased or decreased in a character correction value adjustment process to be described later.
Although the respective character values (character value (happy), character value (active), character value (shy), and character value (wanted)) are fixed in the case where the simulated growth of the robot 200 is completed, the character correction value is data (character correction data) for correcting the character according to how the user interacts with the robot 200 even after the completion of the growth. As will be described later, the character correction value is set on the emotion map 300 according to a condition (second condition based on the external stimulus data) based on any location of an area where the emotion data 121 has been present for the longest time. Note that the second condition based on the external stimulus data is not limited to the above condition, but any condition can be set as long as being a condition for correcting the character after the size of the emotion map 300 is fixed (for example, condition related to the occurrence frequency of the simulated emotion of the robot 200 represented by the emotion data 121). [0111] On the other hand, if there is no external stimulus in step S102 (step S102; No), the processing unit 110 determines whether to perform a spontaneous movement such as a breathing movement (step S107). Although a method of determining whether to perform the spontaneous movement is arbitrary, it is assumed that the determination in step S107 is “Yes” every first reference time (for example, 5 seconds) in the embodiment. [0112] If the spontaneous movement is performed (step S107; Yes), the processing unit 110 proceeds to step S106 and executes the movement selection process with a “lapse of the first reference time” as the movement trigger, and then, proceeds to step S108. 
Regarding claim 9, Hasegawa discloses the method according to claim 8, wherein the controlling includes: changing the emotion parameter so that the robot appears to be in a calm state when the pseudo-emotion is close to the predetermined state and, also, the external stimulus is not detected in the predetermined period (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 10, Hasegawa discloses the method according to claim 9, wherein the controlling includes: changing the emotion parameter in a step-wise manner so that the robot appears to be in the calm state due to the pseudo-emotion becoming a neutral emotion when the pseudo-emotion is close to the predetermined state and, also, the external stimulus is not detected in the predetermined period (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 11, Hasegawa discloses the method according to claim 8, wherein the controlling includes changing a quantity of the emotion parameter that is changed based on a measured elapsed time since the most recent detection of the external stimulus acting on the robot (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 12, Hasegawa discloses the method according to claim 8, wherein the controlling includes changing a quantity of the emotion parameter that is changed based on a pseudo personality of the robot (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 13, Hasegawa discloses the method according to claim 8, wherein: the robot further includes a battery (sec 0072-0077, 0097-0109, 0108-0190), and the controlling includes changing an amount of change of the emotion parameter in accordance with an amount of charge of the battery (sec 0072-0077, 0097-0109, 0108-0190).
Regarding claim 14, Hasegawa discloses the method according to claim 8, wherein: the robot further includes a battery (sec 0179), and the controlling includes changing an amount of change of the emotion parameter based on whether charging of the battery is being performed (sec 0072-0077, 0097-0109, 0108-0190).

Regarding claim 15, Hasegawa discloses a non-transitory computer readable recording medium storing a program thereon, the program being executable by a controller of a robot that imitates a living thing (robot grows with time, expresses emotions, movement, etc.; abstract; sec 0007, 0009, 0039), the robot including the controller and a memory (sec 0005, 0052, 0053), and the program controlling the controller to execute processing comprising: setting in the memory in response to detection of an external stimulus acting on the robot, an emotion parameter expressing a pseudo-emotion of the robot (sec 0072-0077, 0097-0109, 0108-0190); measuring an elapsed time since a most recent detection of the external stimulus acting on the robot (sec 0090, 0017-0122, 0111, 0151, 0169); and changing the set emotion parameter set in the memory in response to a determination that the elapsed time exceeds a predetermined amount of time (emotion parameter is changed when the emotion reaches a certain time; sec 0072-0077, 0109; in addition, when the external stimulus is not detected in a predetermined period, the predetermined period of time is the time the processing exits the loop at step S512 (Yes) or S513 (Yes); see Fig. 20, steps S510 to S513; sec 0077, 0180-0190, 0219, 0220).

Applicant is directed to the prior art as shown below:

[0090] Although not illustrated, the storage unit 120 also stores four character correction values (happy correction value, active correction value, shy correction value, and wanted correction value) which are increased or decreased in a character correction value adjustment process to be described later.
Although the respective character values (character value (happy), character value (active), character value (shy), and character value (wanted)) are fixed in the case where the simulated growth of the robot 200 is completed, the character correction value is data (character correction data) for correcting the character according to how the user interacts with the robot 200 even after the completion of the growth. As will be described later, the character correction value is set on the emotion map 300 according to a condition (second condition based on the external stimulus data) based on any location of an area where the emotion data 121 has been present for the longest time. Note that the second condition based on the external stimulus data is not limited to the above condition, but any condition can be set as long as being a condition for correcting the character after the size of the emotion map 300 is fixed (for example, condition related to the occurrence frequency of the simulated emotion of the robot 200 represented by the emotion data 121). [0111] On the other hand, if there is no external stimulus in step S102 (step S102; No), the processing unit 110 determines whether to perform a spontaneous movement such as a breathing movement (step S107). Although a method of determining whether to perform the spontaneous movement is arbitrary, it is assumed that the determination in step S107 is “Yes” every first reference time (for example, 5 seconds) in the embodiment. [0112] If the spontaneous movement is performed (step S107; Yes), the processing unit 110 proceeds to step S106 and executes the movement selection process with a “lapse of the first reference time” as the movement trigger, and then, proceeds to step S108. 
Regarding claim 16, Hasegawa discloses the robot according to claim 1, wherein the processes comprise increasing a change amount of the emotion parameter that is changed as the measured elapsed time increases, such that: (i) the emotion parameter is changed by a first change amount in response to a determination that the measured elapsed time exceeds, as the predetermined amount of time, a first predetermined amount of time (sec 0077, 0180-0190, 0219, 0220), and (ii) the emotion parameter is changed by a second change amount greater than the first change amount in response to a determination that the measured elapsed time exceeds, as the predetermined amount of time, a second predetermined amount of time longer than the first predetermined amount of time (sec 0077, 0180-0190, 0219, 0220). Regarding claim 17, Hasegawa discloses the robot according to claim 1, wherein the processes comprise changing the emotion parameter in response to the determination that the measured elapsed time exceeds the predetermined amount of time, in accordance with a first rule when the currently set emotion parameter satisfies a first condition, and in accordance with a second rule different from the first rule when the currently set emotion parameter satisfies a second condition different from the first condition (sec 0077, 0180-0190, 0219, 0220). 
Regarding claim 18, Hasegawa discloses the method according to claim 8, further comprising increasing a change amount of the emotion parameter that is changed as the measured elapsed time increases, such that: (i) the emotion parameter is changed by a first change amount in response to a determination that the measured elapsed time exceeds, as the predetermined amount of time, a first predetermined amount of time (sec 0077, 0180-0190, 0219, 0220), and (ii) the emotion parameter is changed by a second change amount greater than the first change amount in response to a determination that the measured elapsed time exceeds, as the predetermined amount of time, a second predetermined amount of time longer than the first predetermined amount of time (sec 0077, 0180-0190, 0219, 0220). Regarding claim 19, Hasegawa discloses the non-transitory computer-readable recording medium according to claim 15, wherein the processes further comprise increasing a change amount of the emotion parameter that is changed as the measured elapsed time increases, such that: (i) the emotion parameter is changed by a first change amount in response to a determination that the measured elapsed time exceeds, as the predetermined amount of time, a first predetermined amount of time (sec 0077, 0180-0190, 0219, 0220), and (ii) the emotion parameter is changed by a second change amount greater than the first change amount in response to a determination that the measured elapsed time exceeds, as the predetermined amount of time, a second predetermined amount of time longer than the first predetermined amount of time (sec 0077, 0180-0190, 0219, 0220). 
Regarding claim 20, Hasegawa discloses the non-transitory computer-readable recording medium according to claim 15, wherein the processes further comprise changing the emotion parameter in response to the determination that the measured elapsed time exceeds the predetermined amount of time, in accordance with a first rule when the currently set emotion parameter satisfies a first condition, and in accordance with a second rule different from the first rule when the currently set emotion parameter satisfies a second condition different from the first condition (sec 0077, 0180-0190, 0219, 0220).

Response to Arguments

Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RONNIE MANCHO whose telephone number is (571) 272-6984. The examiner can normally be reached Mon-Thurs. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott, can be reached at (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RONNIE M MANCHO/Primary Examiner, Art Unit 3657
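The control flow the rejection maps onto Hasegawa (set an emotion parameter on a stimulus, measure elapsed time since the most recent stimulus, and change the parameter once a threshold is exceeded) can be sketched as follows. This is an illustrative reading of the claim 1 language only, not code from either application, and every name is hypothetical:

```python
class EmotionController:
    """Sketch of the claim 1 control flow (all names hypothetical)."""

    def __init__(self, threshold: float = 5.0):
        self.threshold = threshold   # "predetermined amount of time", seconds
        self.emotion = 0.0           # emotion parameter "set in the memory"
        self.last_stimulus = 0.0     # time of most recent external stimulus

    def on_stimulus(self, now: float, delta: float) -> None:
        # Setting step: update the pseudo-emotion when a stimulus is detected.
        self.emotion += delta
        self.last_stimulus = now

    def tick(self, now: float) -> None:
        # Measuring + changing steps: when the measured elapsed time exceeds
        # the threshold, step the parameter toward a neutral (calm) state,
        # as in dependent claims 2-4.
        if now - self.last_stimulus > self.threshold:
            self.emotion *= 0.5
```

Dependent claims 16-20 vary only the change rule (larger steps for longer idle periods, or different rules depending on the current parameter value), which would slot into `tick`.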

Prosecution Timeline

Sep 06, 2023 • Application Filed
May 17, 2025 • Non-Final Rejection — §102
Aug 21, 2025 • Response Filed
Oct 03, 2025 • Final Rejection — §102
Jan 12, 2026 • Examiner Interview Summary
Jan 12, 2026 • Response after Non-Final Action
Jan 12, 2026 • Applicant Interview (Telephonic)
Feb 05, 2026 • Request for Continued Examination
Feb 26, 2026 • Response after Non-Final Action
Mar 21, 2026 • Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12600242 • COMPUTER-IMPLEMENTED METHOD OF CONTROLLING FUTURE BRAKING CAPACITY OF A VEHICLE TRAVELLING ALONG A ROAD • 2y 5m to grant • Granted Apr 14, 2026
Patent 12597350 • COLLISION ALERT DEVICE AND COLLISION ALERT METHOD • 2y 5m to grant • Granted Apr 07, 2026
Patent 12594682 • WIRE-BODY FIXING MEMBER, WIRE-BODY-EXTENSION FIXING MEMBER, AND WIRE-BODY FITTING METHOD • 2y 5m to grant • Granted Apr 07, 2026
Patent 12582490 • REAL TIME IMAGE GUIDED PORTABLE ROBOTIC INTERVENTION SYSTEM • 2y 5m to grant • Granted Mar 24, 2026
Patent 12583334 • SYSTEMS AND METHODS TO PREDICT AND APPLY REGENERATIVE BRAKING • 2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 79% (+3.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 963 resolved cases by this examiner. Grant probability derived from career allow rate.
