Prosecution Insights
Last updated: April 19, 2026
Application No. 18/797,621

RECORDING MEDIUM RECORDING PROGRAM, CONTENT EDITING METHOD, AND INFORMATION PROCESSING DEVICE

Non-Final OA §102
Filed: Aug 08, 2024
Examiner: LETT, THOMAS J
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Seiko Epson Corporation
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 47%

Examiner Intelligence

Career Allow Rate: 83% (599 granted / 719 resolved; +21.3% vs TC avg, above average)
Interview Lift: -36.0% (resolved cases with interview)
Typical Timeline: 2y 8m average prosecution; 26 currently pending
Career History: 745 total applications across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 27.4% (-12.6% vs TC avg)
§102: 47.6% (+7.6% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)

TC averages are estimates. Based on career data from 719 resolved cases.
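The per-statute deltas above are internally consistent: subtracting each delta from the examiner's rate recovers the same implied Tech Center baseline. A quick Python sketch (the subtraction is an assumption about how the "vs TC avg" deltas were computed):

```python
# Examiner's per-statute rate and the stated delta vs the Tech Center
# average, taken from the table above.
stats = {
    "101": (11.1, -28.9),
    "103": (27.4, -12.6),
    "102": (47.6, +7.6),
    "112": (11.6, -28.4),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # implied Tech Center baseline
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg}%")
```

All four deltas imply the same 40.0% baseline, consistent with the Tech Center average being a single pooled estimate rather than a per-statute figure.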

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 4 is objected to because of the following informalities: in line 9, the term “defrorming” should be changed to read “deforming”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-8 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Walker et al. (US 20220362674 A1).

Regarding claim 1, Walker et al. discloses a recording medium recording a program (computer-readable medium, paras. 0110, 0115), the program causing a computer to execute: displaying, based on a captured image obtained by capturing an image of a first object, a first image in which a contour of the first object is indicated by a line (image capture device operable to capture two or more images of the real-world object when the real-world object is placed within a field of view of the image capture device, e.g. on a suitable object support, wherein the two or more images are taken from different viewpoints relative to the real-world object. Each image may be a picture or another form of two-dimensional representation of a field of view of the image capture device which representation allows the determination of a shape, para. 0023; detect predetermined shapes, e.g. ellipses, polygons, etc. in the current view and highlight the detected shapes, para. 0164); displaying an editing image in which a content image indicating content is superimposed on at least a part of the first image (create a real-world toy construction model or otherwise select or create a real-world object resembling an asset to be used as a virtual object, para. 0016; colors on a creature are recognized and corresponding animation targets are overlaid over the regions, para. 0215); receiving input for changing at least one of a position, a shape, and a size of the content image in the editing image (assigning one or more local attributes to the selected part of the virtual object, para. 0075); and changing the at least one of the position, the shape, and the size of the content image based on the input for changing (functionality that allows the user to manipulate the view of the created virtual object 301, e.g. by rotating the object, by zooming, etc., para. 0164).

Regarding claim 2, Walker et al. discloses the recording medium recording the program according to claim 1, wherein the input for changing includes input for selecting the first image, and the changing includes displaying the content image superimposed on the first image in a shape close to a shape of the first image when the input for selecting the first image is received (the user (or the process) may select two virtual objects to be combined and the process may highlight the available connection points on both virtual objects, para. 0234).

Regarding claim 3, Walker et al. discloses the recording medium recording the program according to claim 1, wherein the captured image includes images of the first object and a second object, the displaying the first image includes displaying the first image and a second image indicating a contour of the second object with a line, the input for changing includes input for selecting the first image or the second image, and the changing includes displaying the content image superimposed on the first image in a shape close to a shape of the first image when the input for selecting the first image is received (first and second real-world objects may be a respective toy construction model being constructed from respective pluralities of toy construction elements, each toy construction element comprising one or more coupling members configured for detachably interconnecting the toy construction elements with each other. One or both of the obtained digital representations may be indicative of a visual appearance of the respective real-world objects. The created virtual object may have a visual appearance of a likeness of the second real-world object or of a combination of the first and second real-world objects, e.g. of a combined toy construction model constructed by attaching the first and second real-world toy construction models to each other, para. 0047); and displaying the content image superimposed on the second image in a shape close to a shape of the second image when the input for selecting the second image is received (via display 120 of multiple real-world objects, para. 0228).

Regarding claim 4, Walker et al. discloses the recording medium recording the program according to claim 1, wherein the input for changing includes input for deforming a contour of the content image, and the changing includes: deforming the contour of the content image when the input for deforming is received (evolution of virtual objects often involves a change of appearance of the virtual objects. When inserting virtual objects into a game based on captured images or other representations of user-constructed toy models it may be desirable that the user does not need to completely re-build a model and convert it into a virtual object too many times even though the user may actually only make minor adjustments to a virtual object, e.g. during a game level, while retaining attributes of the original virtual object. A more enjoyable flow would be to build a model at the beginning of the level and then make small adjustments, e.g., so as to overcome challenges throughout the level. A rebuild of a model can mean that toy construction elements are added, taken away, and/or rearranged, para. 0240); and displaying the content image having the contour deformed by the deforming and superimposed on the first image (replaces the part of the 3D digital representation of the toy construction model that has been recognized as corresponding to the known toy construction element with the corresponding 3D representation of the known toy construction element as retrieved from the library 1325. This replacement is schematically illustrated in the bottom right of FIG. 10. The thus modified 3D representation generated in step S1304 may then be used as a 3D representation of a virtual object in a virtual environment, paras. 0205-0206).

Regarding claim 5, Walker et al. discloses the recording medium recording the program according to claim 1, the program further causing the computer to execute: displaying a user interface image for selecting one of a first display mode and a second display mode (transformation or transition between the states may be triggered by a game event, e.g. by a user input, para. 0058); displaying the editing image when the first display mode is selected (transformation or transition between the states may be triggered by a game event, e.g. by a user input, para. 0058); and displaying the content image superimposed on at least a part of the captured image when the second display mode is selected (modified 3D representation generated in step S1304 may then be used as a 3D representation of a virtual object in a virtual environment, para. 0206).

Regarding claim 6, Walker et al. discloses the recording medium recording the program according to claim 1, the program further causing the computer to execute, when the first image is selected or when the first image is focused, highlighting the first image (process has highlighted the detected ellipse by drawing an emphasized ellipse 303 in a predetermined color, e.g. in red, para. 0164).

Regarding claim 7, Walker et al. discloses a content editing method comprising: displaying, based on a captured image obtained by capturing an image of a first object, a first image in which a contour of the first object is indicated by a line (image capture device operable to capture two or more images of the real-world object when the real-world object is placed within a field of view of the image capture device, e.g. on a suitable object support, wherein the two or more images are taken from different viewpoints relative to the real-world object. Each image may be a picture or another form of two-dimensional representation of a field of view of the image capture device which representation allows the determination of a shape, para. 0023; detect predetermined shapes, e.g. ellipses, polygons, etc. in the current view and highlight the detected shapes, para. 0164); displaying an editing image in which a content image indicating content is superimposed on at least a part of the first image (create a real-world toy construction model or otherwise select or create a real-world object resembling an asset to be used as a virtual object, para. 0016; colors on a creature are recognized and corresponding animation targets are overlaid over the regions, para. 0215); receiving input for changing at least one of a position, a shape, and a size of the content image in the editing image (assigning one or more local attributes to the selected part of the virtual object, para. 0075); and displaying at least one of the position, the shape, and the size of the content image in a state in which at least one of the position, the shape, and the size of the content image is changed based on the input for changing (functionality that allows the user to manipulate the view of the created virtual object 301, e.g. by rotating the object, by zooming, etc., para. 0164).

Regarding claim 8, Walker et al. discloses an information processing device comprising: an input device (paras. 0110, 0138); a display device (e.g., display 103, 503, 1203); and a processing device (a processor, such as a CPU, of the data processing system for execution, para. 0110) programmed to execute: displaying, by the display device, based on a captured image obtained by capturing an image of a first object, a first image in which a contour of the first object is indicated by differentiating colors of an inside and an outside of the contour from each other (modification of one or more colors, para. 0028; detection of colors and display of features associated with the colors, para. 0213. See also para. 0215); displaying, by the display device, an editing image in which a content image indicating content is superimposed on at least a part of the first image (create a real-world toy construction model or otherwise select or create a real-world object resembling an asset to be used as a virtual object, para. 0016; colors on a creature are recognized and corresponding animation targets are overlaid over the regions, para. 0215); receiving, via the input device, input for changing at least one of a position, a shape, and a size of the content image in the editing image (assigning one or more local attributes to the selected part of the virtual object, para. 0075); and changing the at least one of the position, the shape, and the size of the content image based on the input for changing (functionality that allows the user to manipulate the view of the created virtual object 301, e.g. by rotating the object, by zooming, etc., para. 0164).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT whose telephone number is (571) 272-7464. The examiner can normally be reached Mon-Fri 9-6 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS J LETT/Primary Examiner, Art Unit 2611

Prosecution Timeline

Aug 08, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602714: LIGHTING AND INTERNET OF THINGS DESIGN USING AUGMENTED REALITY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12570401: Robot and Unmanned Aerial Vehicle (UAV) Systems for Cell Sites and Towers
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567217: SMART CONTENT RENDERING ON AUGMENTED REALITY SYSTEMS, METHODS, AND DEVICES
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561867: SYSTEMS AND METHODS FOR AUTOMATICALLY ADDING TEXT CONTENT TO GENERATED IMAGES
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12555276: Image Generation Method and Apparatus
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 47% (-36.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 719 resolved cases by this examiner. Grant probability derived from career allow rate.
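As a sanity check, the headline figures are reproducible from the career record quoted above. A minimal Python sketch (the round-to-whole-percent step and the additive percentage-point interview adjustment are assumptions about how the tool derives these numbers):

```python
# Grant probability from the examiner's career record:
# 599 granted out of 719 resolved cases (figures quoted above).
granted, resolved = 599, 719
grant_probability = round(granted / resolved * 100)  # -> 83

# "With interview" figure, assuming the stated -36.0% interview lift
# is applied as a simple percentage-point adjustment.
interview_lift_pts = -36
with_interview = grant_probability + interview_lift_pts  # -> 47

print(f"Grant probability: {grant_probability}%")
print(f"With interview: {with_interview}%")
```

Under these assumptions, 599/719 rounds to 83%, and applying the -36-point lift yields the 47% with-interview figure shown above.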
