Prosecution Insights
Last updated: April 19, 2026
Application No. 18/834,625

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Non-Final OA: §101, §102, §103, §112
Filed: Jul 31, 2024
Examiner: RENZE, GEORGE NICHOLAS
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Sony Group Corporation
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (16 granted / 24 resolved; +4.7% vs TC avg; above average)
Interview Lift: +33.3% (resolved cases with interview vs. without; strong)
Typical Timeline: 2y 7m avg prosecution (33 currently pending)
Career History: 57 total applications across all art units

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Deltas are vs. the Tech Center average estimate • Based on career data from 24 resolved cases
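The figures above are simple ratios over the examiner's 24 resolved cases. As a sanity check, here is a minimal sketch recomputing them; the helper and variable names are illustrative, not taken from the analytics tool, and the Tech Center average is inferred from the displayed deltas rather than stated directly.

```python
# Recomputing the dashboard figures above from their stated inputs.
# Helper and variable names are hypothetical, not from the analytics tool.

def pct(numerator, denominator):
    """Percentage rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1)

granted, resolved = 16, 24
career_allow_rate = pct(granted, resolved)  # 66.7, displayed as "67%"

# Statute-specific overcome rates vs. the Tech Center average estimate.
examiner_rate = {"101": 2.7, "102": 16.0, "103": 73.3, "112": 8.0}
tc_average = 40.0  # implied by the displayed deltas, e.g. 2.7 - 40.0 = -37.3
delta_vs_tc = {s: round(r - tc_average, 1) for s, r in examiner_rate.items()}
```

Note that every displayed delta is consistent with the same 40.0% Tech Center baseline across all four statutes.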

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “acquisition unit” in claim 1, “generation unit” in claims 1, 2 and 9, “output unit” in claim 3, “use unit” in claim 4, “determination unit” in claims 5-8, and “importance calculation unit” in claims 9 and 11-16.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 1 recites the limitation of “at least one of information related to a user who views free viewpoint content” in lines 4 through 5 and “information related to the free viewpoint content” in lines 5 through 6. There is insufficient antecedent basis for these limitations in the claim in relation to the information related to the user (who views free viewpoint content) and the information related to the free viewpoint content.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim recites a program, and the body of the claim recites computer program steps, which are nothing more than programmed instructions to be performed by a system. Therefore, the steps/elements recited in claim 20 are non-statutory because a computer program per se, i.e., the descriptions or expressions of the program, is not a physical “thing” and thus does not fall into a process, machine, manufacture, or composition of matter category, making claim 20 ineligible subject matter under 35 USC § 101. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer, which permits the computer program’s functionality to be realized, and is thus statutory.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9, 17 and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Aoki et al. (Pub.
No.: US 2023/0224445 A1), hereinafter Aoki.

Regarding claim 1, Aoki discloses an information processing apparatus (FIG. 2 and paragraph 121 teach that, as shown in FIG. 2 as an example, the data processing apparatus 10 includes a computer 14 and a communication I/F 16. The computer 14 includes a CPU 14A, an NVM 14B, and a RAM 14C, and the CPU 14A, the NVM 14B, and the RAM 14C are connected to each other via a bus 20.) comprising: an acquisition unit configured to acquire at least one of information related to a user who views free viewpoint content and information related to the free viewpoint content (Paragraph 185 teaches that in the first embodiment, the data processing apparatus 10 transmits a plurality of virtual viewpoint videos 30 generated on the basis of the captured video 34 to the user device 12. In a case where the video reproduction request 52 is received from the user device 12, the video reproduction unit 28 acquires information regarding the virtual viewpoint video 30 of which at least a part has been transmitted to the user device 12 that is a transmission source of the video reproduction request 52 among the plurality of virtual viewpoint videos 30 from the reproduced video list 64. On the basis of the acquired information, the video reproduction unit 28 generates information regarding the virtual viewpoint video 30 (untransmitted virtual viewpoint video 30) that has not been transmitted to the user device 12 that is a request source among the plurality of virtual viewpoint videos 30 as the recommendation data 32. The video reproduction unit 28 transmits the generated recommendation data 32 to the user device 12 that is a request source.); and a generation unit configured to generate a viewing time and a viewing position of the free viewpoint content based on the at least one of the information related to the user and the information related to the free viewpoint content (Paragraph 125 teaches that the video generation unit 26 generates the virtual viewpoint video 30 on the basis of a captured video 34 (refer to FIG. 4) obtained by imaging an imaging region from a plurality of directions by using a plurality of imaging devices (not shown) in response to a request from each user device 12. Additionally, paragraph 133 teaches that an example of the video generation process performed by the video generation unit 26 will be specifically described below with reference to FIG. 4. In a case where the video generation request 50 is received from the user device 12, the video generation unit 26 generates a viewpoint related information reception screen 54. The video generation unit 26 transmits the generated viewpoint related information reception screen 54 to the user device 12 that is an output source of the video generation request 50. Paragraph 134 teaches that the viewpoint related information reception screen 54 is a screen used for receiving viewpoint related information 58 related to a viewpoint of the virtual viewpoint video 30. In the first embodiment, the viewpoint related information 58 is data including a viewpoint position, a visual line direction, and an angle of view with respect to a virtual viewpoint. The viewpoint related information 58 is an example of “viewpoint related information” according to the technology of the present disclosure. Lastly, paragraph 141 teaches that the game elapsed time 62C is a time calculated on the basis of an imaging time of the captured video 34 used to generate the virtual viewpoint video 30. Specifically, in the captured video 34 used to generate the virtual viewpoint video 30, a value obtained by subtracting a game start time from an imaging time of a captured image of a first frame is a start time of the game elapsed time 62C, and a value obtained by subtracting the game start time from an imaging time of a captured image of a last frame is an end time of the game elapsed time 62C. The game elapsed time 62C is an example of “first time information” according to the technology of the present disclosure.).

Regarding claim 2, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses wherein the generation unit generates the viewing time indicating a length shorter than a temporal length of the free viewpoint content (Paragraph 149 teaches that the video reproduction unit 28 compares the game elapsed time 62C of the untransmitted virtual viewpoint video 30 with the game elapsed time 62C (hereinafter, referred to as a “reference game elapsed time”) of the virtual viewpoint video 30-001 recorded in the reproduced video list 64. The video reproduction unit 28 moves the untransmitted virtual viewpoint video 30 of which the game elapsed time 62C overlaps the reference game elapsed time over a predetermined value (for example, several tens of seconds) or more to the lowest position of the generated video list 60. That is, the video reproduction unit 28 moves the untransmitted virtual viewpoint video 30 having the game elapsed time 62C having a high similarity with the reference game elapsed time to the lowest position in the generated video list 60. For example, in the first embodiment, the game elapsed time 62C of the virtual viewpoint video 30-003 overlaps the reference game elapsed time for 25 seconds. Therefore, the video reproduction unit 28 moves the virtual viewpoint video 30-003 to the lowest position of the generated video list 60. Additionally, FIG. 10 and paragraph 152 teach that, as shown in FIG. 10 as an example, the recommendation data 32 is displayed on the recommendation screen 70. The recommendation screen 70 displays the video information 62 regarding the five untransmitted virtual viewpoint videos 30. ... The length 62D of the corresponding untransmitted virtual viewpoint video 30 is disposed at the lower right of each thumbnail image 62E. The author 62B, the video ID 62A, and the game elapsed time 62C are disposed on the right side of the corresponding thumbnail image 62E.).

Regarding claim 3, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses further comprising an output unit configured to output the viewing time and the viewing position to an outside via a predetermined network (Paragraph 139 teaches that the video generation unit 26 transmits the generated virtual viewpoint video 30 to the user device 12 that is an output source of the video generation request 50. The user device 12 receives the virtual viewpoint video 30 and displays the received virtual viewpoint video 30 on the touch panel display 12A. The video generation unit 26 adds the viewpoint related information 58 and the video information 62 regarding the transmitted virtual viewpoint video 30 to a reproduced video list 64 corresponding to the account of the user who uses the user device 12 to which the virtual viewpoint video 30 has been transmitted among a plurality of reproduced video lists 64 stored in the database 22. Additionally, FIG. 54 and paragraph 326 teach that the video reproduction unit 28 can output the acquired viewer data 140A and number of times of viewing 140B to an account associated with the administrator of the data processing apparatus 10. The video reproduction unit 28 outputs the viewing data 140 in a table form shown in FIG. 54, for example. [0327] According to the configuration shown in FIG. 54, the administrator of the data processing apparatus 10 can ascertain to which user device 12 the virtual viewpoint video 30 is transmitted in association with the viewpoint related information 58 of the virtual viewpoint video 30. Lastly, paragraph 127 teaches that the communication I/F 16 is communicatively connected to the user device 12 via a communication network.).

Regarding claim 4, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses further comprising a use unit configured to generate a digest video of the free viewpoint content based on the viewing time and the viewing position (Paragraph 281 teaches that alternatively, the video 106 may be a video different from the virtual viewpoint video 30 or the captured video 34. In the example shown in FIG. 41, the video 106 is reproduced on the user device 12 each time the reproduction of each of the five virtual viewpoint videos 30 is ended. However, the technology of the present disclosure is not limited to this, and the video 106 may be reproduced between reproductions of at least two virtual viewpoint videos 30. Examples of the video 106 include a replay video or a digest video of the virtual viewpoint video 30 that is reproduced immediately before the video 106 is reproduced. Additionally, paragraph 301 teaches that, as shown in FIG. 48 as an example, in a case where the video generation unit 26 receives a replay video generation instruction 116 for the virtual viewpoint video 30 that is being reproduced on the user device 12 from the user device 12, the video generation unit 26 may generate a replay video 120 on the basis of the viewpoint related information 58 regarding the virtual viewpoint video 30 that is being reproduced.
Lastly, paragraph 302 teaches that the replay video generation instruction 116 includes the game elapsed time 62C corresponding to a timing at which the replay video generation instruction 116 is received in the virtual viewpoint video 30 that is being reproduced, and the viewpoint position path 58D and the gaze point 58E related to the virtual viewpoint video 30 that is being reproduced.).

Regarding claim 5, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses further comprising a determination unit configured to determine a viewpoint position and a viewing direction in the free viewpoint content, the viewpoint position and the viewing direction being used in generating a video at the viewing position (FIG. 8 and paragraph 148 teach that, as shown in FIG. 8 as an example, the video reproduction unit 28 calculates, for the untransmitted virtual viewpoint videos 30 having an identical viewpoint position difference, a difference (hereinafter, a “visual line direction difference”) between the visual line direction 58B of each untransmitted virtual viewpoint video 30 and the visual line direction 58B (hereinafter, a “reference visual line direction”) of the virtual viewpoint videos 30-001. The video reproduction unit 28 rearranges the untransmitted virtual viewpoint videos 30 in the generated video list 60 in ascending order of difference in visual line direction, that is, in descending order of similarity with the reference visual line direction, for the untransmitted virtual viewpoint videos 30 having an identical viewpoint position difference. Additionally, paragraph 182 teaches that in the first embodiment, the viewpoint related information 58 includes the viewpoint position 58A, the visual line direction 58B, and the angle of view 58C. Therefore, according to the present configuration, compared with a case where the viewpoint related information 58 includes information not correlated with any of the viewpoint position 58A, the visual line direction 58B, and the angle of view 58C, a user can easily select the virtual viewpoint video 30 correlated with at least one of the viewpoint position 58A, the visual line direction 58B, and the angle of view 58C among the untransmitted virtual viewpoint videos 30.).

Regarding claim 6, Aoki discloses everything claimed as applied above (see claim 5); in addition, Aoki discloses wherein the determination unit sets the viewpoint position in an area where the viewing position is not blocked by one or more objects in the free viewpoint content when the viewing position is viewed from the viewpoint position (Paragraph 308 teaches that the video generation unit 26 receives the viewing refusal subject information 122 from the user device 12. The video generation unit 26 specifies a player captured in the object image 124 by using a well-known image recognition technology or the like on the basis of the object image 124 included in the viewing refusal subject information 122. The video generation unit 26 determines the viewpoint related information 58 in which the virtual viewpoint video 30 in which the specified player is not captured is generated. In this case, the viewpoint related information 58 to be determined has the gaze point 58E identical to the gaze point 58E included in the viewing refusal subject information 122, and has the viewpoint position path 58D different from that of the virtual viewpoint video 30 that is being reproduced. Additionally, paragraph 311 teaches that according to the configuration shown in FIG. 49, the user can view the virtual viewpoint video 30 that does not include a subject that the user does not want to view. In the example shown in FIG. 49, the object image 124 is an image of a player who the user does not want to view, but the technology of the present disclosure is not limited to this, and the object image 124 may be any object image in the soccer field including an advertising sign, a goal, a line, and the like.).

Regarding claim 7, Aoki discloses everything claimed as applied above (see claim 6); in addition, Aoki discloses wherein the determination unit sets the viewpoint position in the area where the viewing position is not blocked by the one or more objects when the viewing position is viewed from the viewpoint position in a plurality of consecutive frames (Paragraph 187 teaches that the viewpoint position path indicates displacement of a viewpoint position over time. The gaze point is a gaze position in the imaging region, and in the example shown in FIG. 15, the gaze point is set to a face of a player of interest, but the technology of the present disclosure is not limited to this, and the gaze point may be a position of another object in the imaging region, for example, a position of a ball, a position of a goal, or a position of a specific line. The gaze point may be any position within the imaging region that does not include an object. The angle of view is an angle of view in a case of observing the gaze point from each viewpoint position included in the viewpoint position path.).

Regarding claim 8, Aoki discloses everything claimed as applied above (see claim 5); in addition, Aoki discloses wherein the determination unit determines one or more of the viewpoint positions as a next viewpoint position, and then determines, from the one or more viewpoint positions determined, a viewpoint position having a shortest distance from an immediately preceding viewpoint position as the next viewpoint position (Paragraph 303 teaches that the video generation unit 26 receives the replay video generation instruction 116 from the user device 12. The video generation unit 26 determines the new viewpoint position path 58D different from the viewpoint position path 58D related to the virtual viewpoint video 30 that is being reproduced, on the basis of the viewpoint position path 58D included in the replay video generation instruction 116. The video generation unit 26 generates the replay video 120 that is a new virtual viewpoint video by using the captured video 34 captured between a few seconds to a few tens of seconds before the game elapsed time 62C included in the viewpoint position change instruction 114 and the game elapsed time 62C on the basis of the determined new viewpoint position path 58D and the gaze point 58E identical to the gaze point 58E included in the replay video generation instruction 116. The video generation unit 26 stores the generated replay video 120 in the database 22.).

Regarding claim 9, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses further comprising an importance calculation unit configured to calculate an importance level of the free viewpoint content for each time zone based on the information related to the free viewpoint content (FIG. 34 and paragraph 260 teach that according to the configuration shown in FIG. 34, the user can easily select the virtual viewpoint video 30 intended by the user on the basis of the length of the viewpoint position path 58D with respect to the video length 62D, compared with a case where the recommendation data 32 includes the information regarding the virtual viewpoint video 30 randomly selected. In the example shown in FIG. 34, the video reproduction unit 28 estimates the user's preference by comparing a viewpoint position change ratio with a predetermined value, but the technology of the present disclosure is not limited to this.
The video reproduction unit 28 may estimate the user's preference by comparing the calculated viewpoint position change ratio with the average value, the most frequent value, or the like of the viewpoint position change ratio.), wherein the generation unit generates the viewing time and the viewing position based on the importance level (FIG. 34 and paragraph 260 teach that in the example shown in FIG. 34, the video reproduction unit 28 generates the recommendation data 32 on the basis of the length of the viewpoint position path 58D with respect to the video length 62D, but the technology of the present disclosure is not limited to this. The video reproduction unit 28 may generate the recommendation data 32 on the basis of a length of the gaze point path with respect to the video length 62D.).

Regarding claim 17, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses wherein the information related to the user includes at least one of age, sex, hobby, and preference (Paragraph 230 teaches that in the example shown in FIG. 26, attribute data 64B of the user A indicating “Japan” is recorded in the reproduced video list 64 corresponding to the account of the user A. ... The attribute data may be registered when the user registers the account, or may be included in the video reproduction request 52 received from the user device 12. In addition to the team name that the user likes, the attribute data includes a user's sex, age, favorite player name, and the like. The attribute data is an example of “attribute data” according to the technology of the present disclosure.).

Regarding claim 19, the method steps correlate to and are rejected similarly to the information processing apparatus steps of claim 1. In addition, Aoki discloses an information processing method executed by an information processing apparatus that provides a viewing service of free viewpoint content to a user terminal connected via a predetermined network (Paragraph 52 teaches that according to a forty-sixth aspect of the technology of the present disclosure, there is provided a data processing method of transmitting a virtual viewpoint image generated on the basis of a captured image to a device, the data processing method including acquiring first data regarding a reproduction history and/or registration data of the virtual viewpoint image, and performing control for transmitting second data regarding the virtual viewpoint image to the device on the basis of the acquired first data.).

Regarding claim 20, the program steps correlate to and are rejected similarly to the information processing apparatus steps of claim 1. In addition, Aoki discloses a program for causing a processor to function (FIG. 57 and paragraph 111 teach that FIG. 57 is a conceptual diagram showing an example of an aspect in which an operation program stored in a storage medium is installed in a computer of a data processing apparatus.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 10, 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Aoki in view of Itakura et al. (Pub. No.: US 2020/0126290 A1), hereinafter Itakura.

Regarding claim 10, Aoki discloses everything claimed as applied above (see claim 9); however, Aoki fails to disclose wherein the importance level includes a first importance level for each position. Itakura discloses wherein the importance level includes a first importance level for each position (Paragraph 71 teaches that in the second embodiment, at the time of determining the above-described effectiveness degree in each image capturing apparatus, a position weight of an object is used, in addition to the beam angle. The position weight is a weight for each captured image, which is set in accordance with the position of the object located in a direction of interest within a visual field of the image capturing apparatus.).

Since Aoki teaches an information processing apparatus that can capture three-dimensional (free viewpoint) content from a plurality of different views and arrange the content in multiple ways for a user by applying different recommendation calculations using assorted weights, and Itakura teaches an information processing apparatus that can capture three-dimensional content and specifically apply a position weight of importance to any object of interest within the captured content, it would have been obvious to a person having ordinary skill in the art to combine the features so that the assorted weight/importance factors applied to the free viewpoint content information data could also include a position importance weight. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aoki to incorporate the teachings of Itakura, so that the combined features would provide recommendation calculation weights that include position weights, which would help the system better recognize key viewpoints for a user to take advantage of when viewing the free viewpoint content recommendations.

Furthermore, Aoki in view of Itakura discloses and a second importance level for each time zone in a virtual space represented by the free viewpoint content (FIG. 36 and paragraph 263 of Aoki teach that, as shown in FIG. 36 as an example, on the recommendation screen 70 that displays the recommendation data 32 generated on the basis of the average speed of the virtual viewpoint video 30, a seek bar 95 is displayed below the bird's-eye view image 56. In the seek bar 95, a portion where the speed of the virtual viewpoint video 30 is more than a predetermined value (hereinafter, referred to as a “fast-forwarding portion”) is indicated by hatching.
It is estimated that a portion where the speed of the virtual viewpoint video 30 is less than the predetermined value (hereinafter, referred to as a “slow-forwarding portion”) is a portion corresponding to a scene of interest such as a good play scene and/or a goal scene. The game elapsed time 62C is displayed to correspond to each slow-forwarding portion. In the viewpoint position path 58D, the fast-forwarding portion and the slow-forwarding portion are displayed in a distinguishable manner.). Regarding claim 11, Aoki in view of Itakura discloses everything claimed as applied above (see claim 10); in addition, Aoki in view of Itakura discloses wherein the importance calculation unit calculates the importance level by adding the first importance level and the second importance level (Paragraph 63 of Itakura teaches that first, a rendering weight W for all the image capturing apparatuses and a sum W_sum of the rendering weights of all the image capturing apparatuses are initialized to 0. Next, the image capturing apparatus of interest is set that is the target for which the rendering weight is determined. Additionally, paragraph 64 of Itakura teaches that by equation (7), the rendering weight for all the image capturing apparatuses is determined in order. The sum of the rendering weights is derived and the rendering weight of the image capturing apparatus whose priority level is after the priority level at the point in time at which the sum becomes larger than 1 is set to 0. By doing so, it is made possible to use the limited number of captured images at the time of rendering.). 
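The rendering-weight scheme quoted above from Itakura's paragraphs 63-64 amounts to a priority-ordered cutoff: weights are assigned camera by camera, and every camera whose priority falls after the point where the running sum exceeds 1 gets weight 0. A minimal sketch under that reading (the function and variable names are illustrative assumptions, not taken from the reference):

```python
def truncate_rendering_weights(weights_by_priority):
    """Zero out the weights of cameras whose priority comes after the
    point where the running sum of rendering weights exceeds 1, per the
    characterization of Itakura paragraphs 63-64."""
    result = []
    running_sum = 0.0
    exceeded = False
    for w in weights_by_priority:
        if exceeded:
            result.append(0.0)  # priority after the cutoff: weight 0
        else:
            result.append(w)
            running_sum += w
            if running_sum > 1.0:
                exceeded = True  # later cameras are not used in rendering
    return result

# the third camera pushes the sum past 1, so the fourth is zeroed
print(truncate_rendering_weights([0.5, 0.4, 0.3, 0.2]))  # -> [0.5, 0.4, 0.3, 0.0]
```

This bounds the number of captured images actually used at render time, which matches the stated purpose of the limitation.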
Regarding claim 13, Aoki in view of Itakura discloses everything claimed as applied above (see claim 10); in addition, Aoki in view of Itakura discloses wherein the importance calculation unit divides the virtual space represented by the free viewpoint content into a plurality of regions, and calculates the first importance level for each of the plurality of regions divided (Paragraph 42 of Itakura teaches that at S320, an image capturing viewpoint information acquisition unit 220 acquires position/orientation information (hereinafter, called image capturing viewpoint information) of a plurality of image capturing apparatuses having captured the multi-viewpoint image data acquired by the image data acquisition unit 210 for each image capturing apparatus. In the present embodiment, the image capturing viewpoint refers to each viewpoint of the plurality of the image capturing apparatuses 501 and the image capturing viewpoint information means information on the image capturing viewpoint. In the image capturing viewpoint information, the position/orientation information on the image capturing apparatus 501 within a predetermined coordinate system and for example, position information on the image capturing apparatus 501 and orientation information indicating the optical axis direction are included. Further, it is also possible to include information relating to the viewing angle of the image capturing apparatus 501, such as the focal length or the main point position of the image capturing apparatus 501, in the image capturing viewpoint information. It is possible to associate each pixel of the captured image and the position of the object existing within the captured image by using these pieces of information. Because of this, it is made possible to specify the corresponding pixel on the captured image and obtain color information thereon for the specific region of the object. Additionally, FIGS. 9-10 and paragraph 151 of Aoki teach that as shown in FIG. 
9 as an example, in the recommended video list 66, information regarding the untransmitted virtual viewpoint video 30 is recorded in the recommended order. The video reproduction unit 28 reads out the video information 62 regarding the five untransmitted virtual viewpoint videos 30 recorded in the recommended video list 66 from the database 22 in the arrangement order of the untransmitted virtual viewpoint videos 30 recorded in the recommended video list 66. The video reproduction unit 28 generates the recommendation data 32 including the video information 62 regarding the five untransmitted virtual viewpoint videos 30 read out from the database 22, and the video ID 62A, the author 62B, the game elapsed time 62C, the video length 62D, and the disposition information of the thumbnail image 62E included in the video information 62. The video reproduction unit 28 transmits the generated recommendation data 32 to the user device 12. The user device 12 receives the recommendation data 32 and displays the received recommendation data 32 on the touch panel display 12A as a recommendation screen 70 (refer to FIG. 10).). Claims 14-15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Aoki in view of Fujita (Pub. No.: US 2024/0054739 A1). Regarding claim 14, Aoki discloses everything claimed as applied above (see claim 9); in addition, Aoki discloses wherein the information related to the free viewpoint content includes at least one of event data indicating an event that has occurred in the free viewpoint content and a viewing history of the free viewpoint content (Paragraph 131 teaches that on the menu screen 40, a title 42 of an event in which the virtual viewpoint video 30 can be generated and viewed, and the date 44 on which the event is performed are displayed. A generation button 46 and a viewing button 48 are displayed on the menu screen 40. Additionally, paragraph 240 teaches that in the example shown in FIG. 
28, information regarding the virtual viewpoint videos 30 having the video IDs 62A of “001” and “003” is recorded in the reproduced video list 64 corresponding to the account of the user A. The virtual viewpoint videos 30 having the video IDs 62A of “001” and “003” are a reproduction history of the virtual viewpoint video 30 of the user A, that is, a viewing history of the user A. The video reproduction unit 28 reads out the reproduced video list 64 corresponding to the account of the user C who has a viewing history similar to the viewing history of the user A.). However, Aoki fails to disclose motion of one or more objects included in the free viewpoint content. Fujita discloses motion of one or more objects included in the free viewpoint content (Paragraph 52 teaches that first, a plurality of image capturing devices 2 capture the image-capturing area from different directions to acquire a plurality of captured images. Then, a foreground image obtained by extracting a foreground region corresponding to an object such as a person or a ball and a background image obtained by extracting a background region other than the foreground region are acquired from the plurality of captured images. The foreground image is an image obtained by extracting a region (foreground region) of an object from a captured image acquired by an image capturing device. The object to be extracted as the foreground region refers to a dynamic object (moving object) that moves (with the position or shape changeable) in images captured in a time-series manner from the same direction.). 
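Fujita's paragraph 52, as quoted, extracts a foreground region for a dynamic (moving) object from images captured in time series from the same direction. A toy frame-differencing sketch of that idea (the threshold, the names, and the 2-D-list grayscale frame representation are assumptions for illustration; the reference does not specify this particular method):

```python
def foreground_mask(prev_frame, curr_frame, threshold=10):
    """Mark pixels whose grayscale value changes between consecutive
    frames as foreground (a dynamic/moving object); everything else is
    treated as the static background region."""
    return [
        [abs(c - p) > threshold for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

prev = [[100, 100], [100, 100]]
curr = [[100, 180], [100, 100]]  # one pixel changed: a moving object
print(foreground_mask(prev, curr))  # -> [[False, True], [False, False]]
```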
Since Aoki teaches acquiring particular information related to free viewpoint content, such as data related to different sporting/special events and a user’s viewing history related to sporting/special events, and Fujita teaches acquiring motion-related data of objects within free viewpoint content of different sporting/special events, it would have been obvious to a person having ordinary skill in the art to combine the features together so that in addition to acquiring free viewpoint content data in relation to specific events, additional motion data of multiple objects within that event could also be acquired and used to influence recommendations for a user in relation to their viewing history. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aoki to incorporate the teachings of Fujita, so that the combined features together would allow for the incorporation of motion and movement data, related to different objects, to be included within the information related to the free viewpoint content, which would help improve the user’s overall experience by improving object movement recognition and help reduce potential motion sickness for the user by having more detailed object motion data. Furthermore, Aoki in view of Fujita discloses and the importance calculation unit calculates the importance level using at least one of the event data, the motion of the one or more objects, and the viewing history (Paragraph 240 of Aoki teaches that the video reproduction unit 28 compares the viewing history of the user A with the viewing history of the user C, and weights, among the virtual viewpoint videos 30 recorded in the generated video list 60, the virtual viewpoint video 30 that is included in the viewing history of the user C and is not included in the viewing history of the user A. That is, in the example shown in FIG. 
27, the video reproduction unit 28 weights the virtual viewpoint video 30 having the video ID 62A of “004”. The video reproduction unit 28 rearranges the virtual viewpoint videos 30 in the generated video list 60 in descending order of weight, and stores the generated video list 60 after the rearrangement as the recommended video list 66. The video reproduction unit 28 creates the recommendation data 32 according to the generated recommended video list 66.). Regarding claim 15, Aoki in view of Fujita discloses everything claimed as applied above (see claim 14); in addition, Aoki in view of Fujita discloses wherein the importance calculation unit calculates a third importance level based on each of at least two of the event data, the motion of the one or more objects, and the viewing history, and adds the third importance levels calculated to calculate the importance level (Paragraph 259 of Aoki teaches that in the example shown in FIG. 34, in the reproduced video list 64 corresponding to the account of the user A, the virtual viewpoint video 30 having the video ID 62A of “015” is given a high evaluation 90. The video reproduction unit 28 calculates the viewpoint position change ratio of the virtual viewpoint video 30 having the high evaluation 90, and in a case where the calculated viewpoint position change ratio is less than a predetermined value, it is estimated that the user A prefers the virtual viewpoint video 30 in which a change in the viewpoint position is small. The video reproduction unit 28 weights the virtual viewpoint video 30 in which the viewpoint position change ratio is less than the predetermined value in the generated video list 60. The video reproduction unit 28 rearranges the virtual viewpoint videos 30 in the generated video list 60 in descending order of weight, and stores the generated video list 60 after the rearrangement as the recommended video list 66. 
The video reproduction unit 28 creates the recommendation data 32 according to the generated recommended video list 66. In a case where the calculated viewpoint position change ratio is more than the predetermined value, the video reproduction unit 28 may weight the virtual viewpoint video 30 in which the viewpoint position change ratio is more than the predetermined value in the generated video list 60.). Regarding claim 18, Aoki discloses everything claimed as applied above (see claim 1); in addition, Aoki discloses wherein the information related to the free viewpoint content includes at least one of event data indicating an event that has occurred in the free viewpoint content, and a viewing history of the free viewpoint content (Paragraph 131 teaches that on the menu screen 40, a title 42 of an event in which the virtual viewpoint video 30 can be generated and viewed, and the date 44 on which the event is performed are displayed. A generation button 46 and a viewing button 48 are displayed on the menu screen 40. Additionally, paragraph 240 teaches that in the example shown in FIG. 28, information regarding the virtual viewpoint videos 30 having the video IDs 62A of “001” and “003” is recorded in the reproduced video list 64 corresponding to the account of the user A. The virtual viewpoint videos 30 having the video IDs 62A of “001” and “003” are a reproduction history of the virtual viewpoint video 30 of the user A, that is, a viewing history of the user A. The video reproduction unit 28 reads out the reproduced video list 64 corresponding to the account of the user C who has a viewing history similar to the viewing history of the user A.). However, Aoki fails to disclose motion of one or more objects included in the free viewpoint content and meta information given to the free viewpoint content. 
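The history-based weighting from Aoki's paragraph 240, relied on in the claim 14 and claim 18 discussions above, up-weights videos that appear in a similar user's viewing history but not in the current user's, then re-sorts the generated list in descending order of weight. A sketch under that reading (all names and the binary weight are illustrative assumptions):

```python
def build_recommended_list(generated_videos, my_history, similar_history):
    """Up-weight videos a similar user watched that the current user has
    not, then sort the generated list by descending weight to form the
    recommended video list."""
    def weight(video_id):
        return 1 if video_id in similar_history and video_id not in my_history else 0
    # negating the weight sorts descending while keeping the original
    # order among equally weighted videos (Python's sort is stable)
    return sorted(generated_videos, key=lambda v: -weight(v))

print(build_recommended_list(
    ["001", "002", "003", "004"],
    my_history={"001", "003"},
    similar_history={"001", "003", "004"},
))  # -> ['004', '001', '002', '003']
```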
Fujita discloses motion of one or more objects included in the free viewpoint content and meta information given to the free viewpoint content (Paragraph 52 teaches that first, a plurality of image capturing devices 2 capture the image-capturing area from different directions to acquire a plurality of captured images. Then, a foreground image obtained by extracting a foreground region corresponding to an object such as a person or a ball and a background image obtained by extracting a background region other than the foreground region are acquired from the plurality of captured images. The foreground image is an image obtained by extracting a region (foreground region) of an object from a captured image acquired by an image capturing device. The object to be extracted as the foreground region refers to a dynamic object (moving object) that moves (with the position or shape changeable) in images captured in a time-series manner from the same direction. Additionally, paragraph 110 teaches that when a plurality of virtual camera path data sets are acquired, the virtual viewpoint video output unit 807, described below, can distinguish and output respective virtual viewpoint videos corresponding to the virtual camera path data sets. The virtual camera path data sets can be distinguished by identification IDs described in the respective headers of the virtual camera path data sets. The virtual viewpoint video output unit 807, described below, may perform a process for assigning an identification ID described in a virtual camera path data set to metadata of virtual viewpoint video to be output.). 
Since Aoki teaches acquiring particular information related to free viewpoint content, such as data related to different sporting/special events and a user’s viewing history related to sporting/special events, and Fujita teaches acquiring motion-related data of objects within free viewpoint content of different sporting/special events, it would have been obvious to a person having ordinary skill in the art to combine the features together so that in addition to acquiring free viewpoint content data in relation to specific events, additional motion data of multiple objects within that event, as well as metadata related to the different viewpoints, could also be acquired and used to influence recommendations for a user in relation to their viewing history. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Aoki to incorporate the teachings of Fujita, so that the combined features together would allow for the incorporation of motion and movement data, related to different objects, as well as the metadata of various different viewpoints, to be included within the information related to the free viewpoint content, which would help improve the user’s overall experience by improving object movement recognition and help reduce potential motion sickness for the user by having more detailed object motion data and providing more detailed viewpoint-related metadata as well. Allowable Subject Matter Claims 12 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
The following is a statement of reasons for the indication of allowable subject matter: Claim 12 would be allowable for disclosing wherein the importance calculation unit multiplies the first importance level and the second importance level by a weight set in advance, and adds the first importance level with the second importance level each multiplied by the weight to calculate the importance level. The most relevant arts searched do not teach the dependent claim 12 cited limitations of “The information processing apparatus according to claim 11, wherein the importance calculation unit multiplies the first importance level and the second importance level by a weight set in advance, and adds the first importance level with the second importance level each multiplied by the weight to calculate the importance level.” Claim 16 would be allowable for disclosing wherein the importance calculation unit multiplies the third importance level calculated based on each of the at least two of the event data, the motion of the one or more objects, and the viewing history by a weight set for each of the event data, the motion of the one or more objects, and the viewing history, and adds the third importance levels each multiplied by the weight to calculate the importance level. The most relevant arts searched do not teach the dependent claim 16 cited limitations of “The information processing apparatus according to claim 15, wherein the importance calculation unit multiplies the third importance level calculated based on each of the at least two of the event data, the motion of the one or more objects, and the viewing history by a weight set for each of the event data, the motion of the one or more objects, and the viewing history, and adds the third importance levels each multiplied by the weight to calculate the importance level.” Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Miyata et al. (Pub. 
No.: US 2023/0396749 A1) teaches an image processing apparatus that can be used to acquire viewpoint positional information related to multiple different virtual viewpoints. Tamura et al. (Pub. No.: US 2023/0085590 A1) teaches an image processing apparatus that acquires specific region information and outputs specific region processed images corresponding to the specific region information. Any inquiry concerning this communication or earlier communications from the examiner should be directed to George Renze whose telephone number is (703)756-5811. The examiner can normally be reached Monday-Friday 9:00am - 6:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /G.R./Examiner, Art Unit 2613 /XIAO M WU/Supervisory Patent Examiner, Art Unit 2613
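The limitation the examiner indicated as allowable in claims 12 and 16 is, as characterized in the statement of reasons above, a preset-weight linear combination: each component importance level is multiplied by a weight set in advance and the products are added. A minimal sketch of that calculation (the names and example values are illustrative assumptions):

```python
def combined_importance(levels, weights):
    """Multiply each importance level by its preset weight and add the
    products to obtain the overall importance level (claims 12 and 16
    as characterized in the Office Action)."""
    if len(levels) != len(weights):
        raise ValueError("each importance level needs a preset weight")
    return sum(l * w for l, w in zip(levels, weights))

# claim 12 style: a position-based (first) and a time-zone-based (second)
# importance level, each with a weight set in advance
importance = combined_importance([0.8, 0.5], [0.6, 0.4])  # 0.48 + 0.20 = 0.68
```

The same function covers the claim 16 variant, where the inputs are per-source third importance levels (event data, object motion, viewing history) with per-source weights.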

Prosecution Timeline

Jul 31, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602407
SYSTEMS AND METHODS FOR GENERATING A UNIQUE IDENTITY FOR A GEOSPATIAL OBJECT CODE BY PROCESSING GEOSPATIAL DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12573147
LANDMARK DATA COLLECTION METHOD AND LANDMARK BUILDING MODELING METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12555315
HEURISTIC-BASED VARIABLE RATE SHADING FOR MOBILE GAMES
2y 5m to grant Granted Feb 17, 2026
Patent 12530759
System and Method for Point Cloud Generation
2y 5m to grant Granted Jan 20, 2026
Patent 12505508
DIGITAL IMAGE RADIAL PATTERN DECODING SYSTEM
2y 5m to grant Granted Dec 23, 2025
Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+33.3%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
