Prosecution Insights
Last updated: April 19, 2026
Application No. 18/813,067

INFORMATION PROCESSING DEVICE AND STORAGE MEDIUM STORING COMPUTER PROGRAM

Status: Non-Final OA (§103)
Filed: Aug 23, 2024
Examiner: SZE, BRIANA
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: BANDAI CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline; 1 currently pending)
Total Applications: 1 (across all art units)

Statute-Specific Performance

§103: 100.0% (+60.0% vs TC avg)
Based on career data from 0 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. JP2023-137375, filed on 08/25/2023.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference characters “ROM 130” and “ROM 103” have both been used to designate ROM 103, reference characters “RAM 120” and “RAM 102” have both been used to designate RAM 102, and reference characters “Detection unit 157” and “Detection unit 158” have both been used to designate detection unit 157. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 12 is objected to because of the following informality: “an” in line 2 of claim 12 should read “a”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination.
– An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “scan controller,” “data generator,” and “video generator” in claim 1.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Allowable Subject Matter

Claims 5-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: In regards to dependent claim 5, none of the cited prior art, alone or in combination, provides motivation to teach wherein the first object is installed in a first installation unit, wherein the second object is installed in a second installation unit different from the first installation unit, and wherein at least a part of the first object is arranged at a position higher than the second installation unit by the first installation unit, in conjunction with the features of claim 2, from which it depends, that the first object is disposed in a center of the table and the second object is disposed near an end of the table. In addition, there is no teaching, suggestion, or motivation found in the current references, and none that can be inferred from the examiner’s own knowledge, with respect to the current limitation.

In regards to dependent claims 7-11, these claims depend from an objected-to claim, and thus are objected to based on the same rationale as provided above.
Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 and 12-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ooi (WO2022137907A1) in view of Kiyomiya (US20250182374A1).

Regarding claim 1, Ooi teaches an information processing device comprising: a scan controller configured to cause a scanner to scan an outer appearance of an object disposed on a table and generate a scan image. Para [0032] discloses “the 3D scan of the subject SU is performed using a 3D scanner SC (interpreted as a scan controller). The 3D scanner SC has, for example, a plurality of measuring support columns 12 arranged in a ring around the object SU”. Para [0034]: “The 3D scan is performed on the subject SU, which is dressed in the same way as when it was photographed by camera 10 (for generating the virtual viewpoint image VI).
Based on the shooting data from multiple cameras 13, a subject model is generated (interpreted as scanning an outer appearance of an object and generating a scan image) that includes the geometry information and texture information of the subject SU.”;

a data generator configured to generate 3-dimensional model data from the scan image. Para [0034]: “The 3D scan is performed on the subject SU, which is dressed in the same way as when it was photographed by camera 10 (for generating the virtual viewpoint image VI). Based on the shooting data from multiple cameras 13, a subject model is generated (interpreted as a 3D model generated from the scan image) that includes the geometry information and texture information of the subject SU.”;

a video generator configured to generate a video in which a 3-dimensional model based on the 3-dimensional model data is disposed in a virtual space. Para [0034]: “The 3D scan is performed on the subject SU, which is dressed in the same way as when it was photographed by camera 10 (for generating the virtual viewpoint image VI). Based on the shooting data from multiple cameras 13, a subject model is generated that includes the geometry information and texture information of the subject SU” (interpreted as a 3D model based on 3D data). Para [0039]: “Returning to Figure 5, the rendering unit 35 (interpreted as video generator) obtains viewpoint information regarding the virtual viewpoint VP from the video producer or viewer AD. The rendering unit 35 renders the volumetric model VM and the pose model PM based on viewpoint information to generate a shadowed image as seen from a virtual viewpoint” (interpreted as disposed in a virtual space);

and a receiver configured to receive an input of attribute information regarding the object that is a scanning target. Para [0014]: “Geometry information is information that shows the 3D shape of the subject SU. Geometry information is obtained, for example, as polygon data or voxel data.
Texture information refers to information that indicates the color, pattern, and texture of the subject object (SU)”. Para [0031]: “For example, the pose generation unit 34 (interpreted as a receiver) acquires scan data SD of the subject SU obtained by 3D scanning the subject SU before shooting. The scan data SD includes geometry information and texture information of the subject SU (geometry and texture information interpreted as attribute information regarding the object that is the scanning target, regarding subject SU). The pose generation unit 34 generates a pose model PM using the scan data SD and posture PO)”;

wherein the video generator is configured to generate the video so that an effect is included for each object in accordance with the attribute information. Para [0014]: “Geometry information is information that shows the 3D shape of the subject SU. Geometry information is obtained, for example, as polygon data or voxel data. Texture information refers to information that indicates the color, pattern, and texture of the subject object (SU)”. Para [0031]: “For example, the pose generation unit 34 acquires scan data SD of the subject SU obtained by 3D scanning the subject SU before shooting. The scan data SD includes geometry information and texture information of the subject SU (geometry and texture information interpreted as attribute information regarding the object that is the scanning target, regarding subject SU). The pose generation unit 34 generates a pose model PM using the scan data SD and posture PO).”

Ooi does not explicitly disclose, but Kiyomiya teaches, an object disposed on a table. Para [0023]: “The turntable 130 is a rotation device that is rotatable in a state where the toy body is placed under the control of the information processing device 100.
In the present embodiment, after the scanner 110 is positioned at any imaging position and imaging angle by the robot arm 120, the turntable 130 is rotated one round to perform scanning” (interpreted as scanning an object disposed on a table).

Regarding claims 2-4, Ooi does not explicitly teach the information processing device according to claim 1, wherein the object includes a first object and a second object, wherein the first object is disposed in a center of the table, and wherein the second object is disposed near an end of the table; the information processing device according to claim 2, wherein the second object is an attachment of the first object; or that the first object is a humanoid model and the second object is an attachment component of the model.

However, Kiyomiya teaches the object includes a first object and a second object, wherein the first object is disposed in a center of the table, wherein the second object is disposed near an end of the table, and wherein the second object is an attachment of the first object or a component of the model. “FIG. 3 is a diagram showing an example of an appearance of a fastener that fixes the toy body according to the embodiment” [0010] (the fastener interpreted as a second object or an attachment of the first object, and the toy body interpreted as the first object). The first object is a humanoid model: “A toy body 200 is a toy body having a doll-like (a robot, or a human) appearance” [0032] (the toy body is interpreted as a humanoid model).
Regarding claim 12, Kiyomiya teaches the information processing device according to claim 2, wherein the first object and the second object are installed so that front faces thereof are oriented in a same direction. “In subsequent step S503, the user of the information processing device 100 selects the fastener 300 in accordance with the displayed information on the fastener, and attaches the fastener 300 to the toy body 200. A posture of the toy body 200 can be fixed by the fastener 300. Thereafter, in S504, the user attaches the toy body 200 to a fixing member on a turntable” [0047] (interpreted as the fastener attaching to the first object where the front face of the first object is).

Regarding claim 13, Kiyomiya teaches the attribute information includes information of a type of the first object and a type of the second object: “a determination unit configured to determine a type of the object; an obtaining unit configured to obtain frame data, the frame data corresponding to the determined type of the object” [0029] (attribute information interpreted as frame data from the determination and obtaining units).

Regarding claim 14, Ooi teaches the attribute information further includes information regarding color used for an effect of at least one of the first object and the second object.
“The volumetric model VM includes, for example, geometry information, texture information, and depth information of the subject SU… Texture information refers to information that indicates the color, pattern, and texture of the subject object SU” [0014]. In addition, [0029] discloses that the “volumetric model generation unit 32 generates a volumetric model VM of the subject SU for each frame based on the image data of the subject SU” and “the volumetric model generation unit 32 generates a volumetric model VM of the subject SU based on the detected geometry information, texture information, and depth information”, which shows that the subject, or first object, rendered as a volumetric model would display the effect with the color.

Regarding claim 15, Ooi teaches the video generator is configured to include an effect video related to the second object based on the attribute information. “The viewpoint information includes information regarding a virtual viewpoint for viewing the subject SU… The viewpoint information is input by the video producer or the viewer AD” [0015], and “The rendering unit 35 acquires viewpoint information regarding the virtual viewpoint VP from the video producer or viewer AD. The rendering unit 35 renders the volumetric model VM and the avatar model AM based on viewpoint information” [0039]. The rendering unit is interpreted as the receiver, and the viewpoint information is interpreted as the attribute information.

Regarding claim 16, Ooi teaches the scan controller is configured to control a position and an angle of the scanner in accordance with the attribute information of the type of the first object: “the plurality of cameras 10 are arranged so as to surround the periphery of the shooting space SS including the subject SU. The mounting position and mounting direction of the plurality of cameras 10” [0012] (the scan controller interpreted as the camera).
Regarding claim 17, Kiyomiya teaches the motion controller is configured to rotate the table clockwise and counterclockwise. “In the example of FIG. 8, when the scan information 811 is shifted to an upper side in (B) of FIG. 8 and is rotated in a counterclockwise direction, the scan information 811 is moved to a lower side by a cross key button and is rotated by a rotation button in a clockwise direction” [0055] (the scan information is interpreted as the motion controller).

Regarding claim 18, Ooi teaches an information processing device comprising: a scan controller configured to cause a scanner to scan an outer appearance of an object disposed on a table and generate a scan image. Para [0032] discloses “the 3D scan of the subject SU is performed using a 3D scanner SC (interpreted as a scan controller). The 3D scanner SC has, for example, a plurality of measuring support columns 12 arranged in a ring around the object SU”. Para [0034]: “The 3D scan is performed on the subject SU, which is dressed in the same way as when it was photographed by camera 10 (for generating the virtual viewpoint image VI). Based on the shooting data from multiple cameras 13, a subject model is generated (interpreted as scanning an outer appearance of an object and generating a scan image) that includes the geometry information and texture information of the subject SU.”;

a data generator configured to generate 3-dimensional model data from the scan image. Para [0034]: “The 3D scan is performed on the subject SU, which is dressed in the same way as when it was photographed by camera 10 (for generating the virtual viewpoint image VI). Based on the shooting data from multiple cameras 13, a subject model is generated (interpreted as a 3D model generated from the scan image) that includes the geometry information and texture information of the subject SU.”;

a video generator configured to generate a video in which a 3-dimensional model based on the 3-dimensional model data is disposed in a virtual space.
Para [0034]: “The 3D scan is performed on the subject SU, which is dressed in the same way as when it was photographed by camera 10 (for generating the virtual viewpoint image VI). Based on the shooting data from multiple cameras 13, a subject model is generated that includes the geometry information and texture information of the subject SU” (interpreted as a 3D model based on 3D data). Para [0039]: “Returning to Figure 5, the rendering unit 35 (interpreted as video generator) obtains viewpoint information regarding the virtual viewpoint VP from the video producer or viewer AD. The rendering unit 35 renders the volumetric model VM and the pose model PM based on viewpoint information to generate a shadowed image as seen from a virtual viewpoint” (interpreted as disposed in a virtual space);

and a receiver configured to receive an input of attribute information regarding the object that is a scanning target. Para [0014]: “Geometry information is information that shows the 3D shape of the subject SU. Geometry information is obtained, for example, as polygon data or voxel data. Texture information refers to information that indicates the color, pattern, and texture of the subject object (SU)”. Para [0031]: “For example, the pose generation unit 34 (interpreted as a receiver) acquires scan data SD of the subject SU obtained by 3D scanning the subject SU before shooting. The scan data SD includes geometry information and texture information of the subject SU (geometry and texture information interpreted as attribute information regarding the object that is the scanning target, regarding subject SU). The pose generation unit 34 generates a pose model PM using the scan data SD and posture PO)”;

wherein the video generator is configured to generate the video so that an effect is included for each object in accordance with the attribute information. Para [0014]: “Geometry information is information that shows the 3D shape of the subject SU.
Geometry information is obtained, for example, as polygon data or voxel data. Texture information refers to information that indicates the color, pattern, and texture of the subject object (SU)”. Para [0031]: “For example, the pose generation unit 34 acquires scan data SD of the subject SU obtained by 3D scanning the subject SU before shooting. The scan data SD includes geometry information and texture information of the subject SU (geometry and texture information interpreted as attribute information regarding the object that is the scanning target, regarding subject SU). The pose generation unit 34 generates a pose model PM using the scan data SD and posture PO).”

Ooi does not explicitly disclose, but Kiyomiya teaches, an object disposed on a table. Para [0023]: “The turntable 130 is a rotation device that is rotatable in a state where the toy body is placed under the control of the information processing device 100. In the present embodiment, after the scanner 110 is positioned at any imaging position and imaging angle by the robot arm 120, the turntable 130 is rotated one round to perform scanning” (interpreted as scanning an object disposed on a table).

Ooi and Kiyomiya are combinable because they are in the same field of endeavor regarding 3D scanning. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the application to combine the scan controller, data generator, video generator, and receiver of Ooi with an object disposed on a table of Kiyomiya in order to obtain information on an appearance of an object with high accuracy and make the information available in a virtual space (Kiyomiya, [0007]).

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: IMAI, US20160044300A1.
Examiner believes this reference is pertinent because it discloses a video generator configured to generate a video in which a 3-dimensional model based on the 3-dimensional model data is disposed in a virtual space: “The video distribution system is a system that generates and distributes a virtual viewpoint video” [0023].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIANA SZE, whose telephone number is (571) 272-9916. The examiner can normally be reached Monday-Thursday, 6am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.S./ Examiner, Art Unit 2614
/TERRELL M ROBINSON/ Primary Examiner, Art Unit 2614

Prosecution Timeline

Aug 23, 2024
Application Filed
Apr 02, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
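As a rough illustration of how projection metrics like the ones above (career allow rate, interview lift) are typically derived from an examiner's resolved-case records, here is a minimal sketch. The `Case` record and function names are hypothetical and not this dashboard's actual implementation; they only show why zero resolved cases leaves both metrics undefined.

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool      # resolved as an allowance?
    interviewed: bool  # was an examiner interview held during prosecution?

def allow_rate(cases):
    """Career allow rate = granted / resolved; None when nothing is resolved."""
    if not cases:
        return None
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference between interviewed and non-interviewed cases."""
    with_interview = [c for c in cases if c.interviewed]
    without_interview = [c for c in cases if not c.interviewed]
    a, b = allow_rate(with_interview), allow_rate(without_interview)
    if a is None or b is None:
        return None
    return a - b

# With zero resolved cases, as on this page, both metrics are undefined,
# which is why the dashboard's 0% / +0.0% figures carry little signal.
print(allow_rate([]))  # None
```

With no resolved cases the functions return `None` rather than 0%, which makes the low-confidence caveat above explicit: the displayed percentages are placeholders, not observed rates.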
