Prosecution Insights
Last updated: April 19, 2026
Application No. 19/100,086

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

Non-Final OA (§102, §103)
Filed: Jan 30, 2025
Examiner: LIN, HANG
Art Unit: 2626
Tech Center: 2600 — Communications
Assignee: Maxell, Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
With Interview: 65%

Examiner Intelligence

Career Allow Rate: 65% (295 granted / 455 resolved; +2.8% vs TC avg)
Interview Lift: +0.3% (minimal)
Avg Prosecution: 2y 3m (typical timeline)
Total Applications: 467 across all art units (12 currently pending)
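The figures above are internally consistent and can be reproduced from the raw counts. A minimal sketch, assuming the allow rate is simply grants divided by resolved cases (the derivation method is an assumption; only the input numbers come from this page):

```python
# Reproducing the examiner metrics shown above from raw counts.
# The formulas are assumptions; the inputs are the page's figures.

granted = 295     # granted applications
resolved = 455    # resolved cases (granted + abandoned)
pending = 12      # currently pending

career_allow_rate = granted / resolved    # ~0.648, displayed as 65%
total_applications = resolved + pending   # 467, matching "Total Applications"

# "+2.8% vs TC avg" implies a Tech Center average near
# career_allow_rate - 0.028, i.e. roughly 62% (an inference).
tc_avg_estimate = career_allow_rate - 0.028

print(f"Career allow rate: {career_allow_rate:.1%}")  # Career allow rate: 64.8%
print(f"Total applications: {total_applications}")    # Total applications: 467
```

The same arithmetic also explains the headline 65% grant probability in the projections below, which the page says is derived from the career allow rate.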

Statute-Specific Performance

§101: 1.1% (-38.9% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§112: 9.0% (-31.0% vs TC avg)
Deltas are relative to Tech Center average estimates; based on career data from 455 resolved cases.
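The "vs TC avg" deltas in the table can be cross-checked directly; each one implies the same Tech Center baseline. A quick sketch (the subtraction is an assumption about how the deltas were computed):

```python
# Cross-checking the statute table: recover the Tech Center average
# implied by each statute's "vs TC avg" delta. Rates are the
# examiner's statute-specific figures from the table above.

examiner_rate = {"§101": 1.1, "§103": 58.5, "§102": 27.3, "§112": 9.0}
delta_vs_tc = {"§101": -38.9, "§103": 18.5, "§102": -12.7, "§112": -31.0}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}

# All four statutes resolve to a 40.0% implied TC average,
# suggesting a single baseline was used across the table.
print(implied_tc_avg)
```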

Office Action

§102, §103
DETAILED ACTION

Status of Application

Claims 1-15 are pending in the instant application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-9 and 13-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Latta et al. (US 20130326364 A1).

Regarding claim 1, Latta teaches an information processing device to be worn on a user, comprising: a display processing device configured to display an augmented reality object within a field of view display range (Para 29: head mounted display device 2 with FOV); a walking detection sensor configured to detect a walking state of the user; and a processor, the processor being configured to execute control of: determining whether the user is walking based on sensor information by the walking detection sensor (Para 22, 126: the user's walk toward the AR object is detected to make the virtual object appear larger, which means there is a walking detection sensor; Para 29-31 show the processor); and upon determining that the user is walking, enlarging the augmented reality object; and displaying the augmented reality object as enlarged within the field of view display range (Para 22, 126: the user's walk toward the AR object is detected to make the virtual object appear larger).

Regarding claim 2, Latta already teaches the information processing device according to claim 1, and Latta further teaches wherein the field of view display range has a central zone including a center point of the field of view display range and a peripheral zone located at an outer edge of the central zone (Para 146: the FOV inherently includes a central zone with a center point and a peripheral zone at an outer edge of the central zone), and the processor enlarges the augmented reality object being displayed in the peripheral zone but does not enlarge the augmented reality object being displayed in the central zone (Para 126, 146: the user may walk closer to an object located in the peripheral zone while not walking closer to the augmented reality object displayed in the central zone, e.g., walking around the virtual object in the central zone as shown in paragraph 126 while walking closer to a virtual object located in the peripheral zone).

Regarding claim 3, Latta already teaches the information processing device according to claim 2, and Latta further teaches wherein the processor enlarges the augmented reality object being displayed in the peripheral zone to be larger than the augmented reality object being displayed in the central zone (Para 126, 146: the same scenario, in which the user walks closer to the object located in the peripheral zone but not to the object in the central zone).

Regarding claim 4, Latta already teaches the information processing device according to claim 2, and Latta further teaches further comprising a head motion detection sensor configured to detect a motion of a head of the user wearing the information processing device (Para 38, 96: head motion is detected), wherein upon detecting the motion of the head of the user based on sensor information by the head motion detection sensor when determining that the user is stationary based on the sensor information by the walking detection sensor (Para 96, 126: the HMD can detect head motion and user walking, and the user can be stationary when turning the head), the processor moves a position of the field of view display range following the motion of the head (Para 96), and enlarges the augmented reality object being displayed in the peripheral zone within the field of view display range after moving but does not enlarge the augmented reality object being displayed in the central zone within the field of view display range after moving (Para 138: in this case the dynamic object located in the peripheral zone is enlarged due to collision, while the virtual object in the central zone has no collision).

Regarding claim 5, Latta already teaches the information processing device according to claim 4, and Latta further teaches wherein upon detecting the motion of the head of the user based on the sensor information by the head motion detection sensor when determining that the user is walking based on the sensor information by the walking detection sensor, the processor does not change the display size of the augmented reality object before and after detecting the motion of the head (Para 96, 126: in this case the user is not moving closer to the virtual object).

Regarding claim 6, Latta already teaches the information processing device according to claim 2, and Latta further teaches wherein the processor detects a walking speed of the user based on the sensor information by the walking detection sensor, and controls a display size depending on the walking speed when enlarging the augmented reality object (Para 126: the virtual object is enlarged sooner if the user walks faster to get closer to it).

Regarding claim 8, Latta teaches an information processing device to be worn on a user, comprising: a display processing device configured to display an augmented reality object within a field of view display range (Para 29: head mounted display device 2 with FOV); a walking detection sensor configured to detect a walking state of the user (Para 22, 126: the user's walk toward the AR object is detected to make the virtual object appear larger, which means there is a walking detection sensor); and a processor, the field of view display range having a central zone including a center point of the field of view display range and a peripheral zone located at an outer edge of the central zone (Para 146: the FOV inherently includes a central zone with a center point and a peripheral zone at an outer edge of the central zone; Para 29-31 show the processor), and the processor being configured to execute display control of: determining whether the user is walking based on sensor information by the walking detection sensor, and upon determining that the user is walking, moving the augmented reality object being displayed in the central zone to the peripheral zone (Para 96, 126, 146: depending on the user's walking direction, the virtual object may move from the central zone to the peripheral zone).

Regarding claim 9, Latta already teaches the information processing device according to claim 8, and Latta further teaches wherein the processor executes the display control of enlarging the augmented reality object being displayed in the central zone (Para 22, 126: the user's walk toward the AR object is detected to make the virtual object appear larger when the virtual object is located in the central zone) and moving the augmented reality object as enlarged to the peripheral zone (Para 96, 126, 146: depending on the user's walking direction thereafter, the virtual object may move from the central zone to the peripheral zone).

Regarding claim 13, Latta already teaches the information processing device according to claim 1, and Latta further teaches wherein the walking detection sensor is at least one of a camera for capturing an image of a foot of the user, an acceleration sensor, a geomagnetic sensor, or a gyroscope sensor, and the head motion detection sensor is at least one of the acceleration sensor, the geomagnetic sensor, or the gyroscope sensor (Para 38).

Regarding claim 14, Latta teaches an information processing method to be executed by an information processing device to be worn on a user (Para 29: head mounted display device 2 with FOV), comprising: a walking determination step of determining whether the user is walking based on sensor information by a walking detection sensor configured to detect a walking state of the user (Para 22, 126: the user's walk toward the AR object is detected to make the virtual object appear larger, which means there is a walking detection sensor; Para 29-31 show the processor); an enlargement step of enlarging an augmented reality object to be larger than that in a state where the user is stationary upon determining that the user is walking; and a display step of causing a display processing device to display the augmented reality object as enlarged (Para 22, 126: the user's walk toward the AR object is detected to make the virtual object appear larger).

Regarding claim 15, Latta already teaches the information processing method according to claim 14, and Latta further teaches further comprising a display position movement step of moving a display position of the augmented reality object within a field of view display range on the display processing device upon determining that the user is walking (Para 96, 126, 146: depending on the user's walking direction, the virtual objects are moved around relatively), wherein the field of view display range has a central zone including a center point of the field of view display range and a peripheral zone located at an outer edge of the central zone (Para 146: the FOV inherently includes a central zone with a center point and a peripheral zone at an outer edge of the central zone), and the display position movement step includes moving the augmented reality object being displayed in the central zone to the peripheral zone (Para 96, 126, 146: depending on the user's walking direction, the virtual object may move from the central zone to the peripheral zone).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 7 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Latta et al. (US 20130326364 A1), further in view of Connellan et al. (US 20200310561 A1).

Regarding claim 7, Latta already teaches the information processing device according to claim 1. However, Latta does not teach wherein the processor displays and controls, within the field of view display range, an operation menu for setting and inputting a display size of the augmented reality object. Connellan teaches wherein the processor displays and controls, within the field of view display range, an operation menu for setting and inputting a display size of the augmented reality object (Para 78). Therefore it would have been obvious to one with ordinary skill, before the effective filing date of the invention, to modify Latta with Connellan to teach wherein the processor displays and controls, within the field of view display range, an operation menu for setting and inputting a display size of the augmented reality object, in order to produce the predictable result of virtual object size customization and further enhance the user experience of the AR device.

Regarding claim 12, Latta already teaches the information processing device according to claim 8. However, Latta does not teach wherein the processor displays and controls, within the field of view display range, an operation menu for setting and inputting a display size of the augmented reality object. Connellan teaches wherein the processor displays and controls, within the field of view display range, an operation menu for setting and inputting a display size of the augmented reality object (Para 78). Therefore it would have been obvious to one with ordinary skill, before the effective filing date of the invention, to modify Latta with Connellan to teach wherein the processor displays and controls, within the field of view display range, an operation menu for setting and inputting a display size of the augmented reality object, in order to produce the predictable result of virtual object size customization and further enhance the user experience of the AR device.

Allowable Subject Matter

Claims 10-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HANG LIN whose telephone number is (571)270-7596. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Temesghen Ghebretinsae, can be reached at 571-272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HANG LIN/
Primary Examiner, Art Unit 2626

Prosecution Timeline

Jan 30, 2025: Application Filed
Dec 03, 2025: Non-Final Rejection (§102, §103)
Mar 10, 2026: Applicant Interview (Telephonic)
Mar 10, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579932: DISPLAY CONTROL METHOD AND DEVICE FOR A DISPLAY PANEL, STORAGE MEDIUM, AND DISPLAY DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12562131: DISPLAY DEVICE AND ELECTRONIC DEVICE INCLUDING THE DISPLAY DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547261: Rotary Control Input Device for a Capacitive Touch Screen (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548504: DRIVING CIRCUIT, DISPLAY PANEL AND DISPLAY DEVICE (granted Feb 10, 2026; 2y 5m to grant)
Patent 12525160: ELECTROLUMINESCENT DISPLAY APPARATUS (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview: 65% (+0.3%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 455 resolved cases by this examiner. Grant probability derived from career allow rate.
