DETAILED ACTION
This action is in response to the filing of January 20, 2026. Claims 1-4, 6-14 and 17-18 are pending and have been considered below:
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 7-8, 10-11, 14 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (“Wang” 20140340404 A1) in view of Yee (20190083885 A1) and Tong et al. (“Tong” 11257282 B2).
Claim 1: Wang discloses an image processing apparatus comprising:
one or more processors executing instructions to perform a method comprising:
generating, based on a virtual viewpoint and a virtual space, a first virtual viewpoint image by generating a three-dimensional model of a first object that is included in images captured by a plurality of image capturing apparatuses, and by arranging the first object in the virtual space (Figure 6 and Paragraphs 9-10 and 33-37; captured images used to generate 3D content for an ROI (i.e., a save by a goalkeeper));
Wang further discloses displaying, on a display, the second virtual viewpoint image (Paragraphs 4 and 9-10 and 37; render reconstructed 3D model).
However, Wang may not explicitly disclose obtaining input of drawing of a line by a user on a screen in which the first virtual viewpoint image is displayed;
performing conversion of the drawn line into a second object that is not included in the images captured by the plurality of image capturing apparatuses and is arranged in the virtual space; and
wherein a timecode corresponding to the first virtual viewpoint image when the line is drawn is associated with the second object, and wherein in a case where the second virtual viewpoint image corresponding to the timecode associated with the second object is displayed, the second virtual viewpoint image is displayed based on the virtual space in which the second object is arranged.
Yee is provided because it discloses a configuration of virtual cameras. Yee also provides functionality where a “line” is drawn on the display; the drawn input is then converted into an area where controls (second objects/uncaptured elements) are provided within the displayed viewport (Yee: Figure 4b and Paragraphs 125-127).
Yee further discloses providing timecoded scenes which correspond to the display of an object for playback (Figures 5a-c; Paragraphs 106 and 109; temporal component for viewing space; and Paragraphs 135-139; objects displayed according to timing playback). Also, gesture inputs which provide a second object (Figures 4a-c) will be associated with a time period within the scene (Paragraphs 103-104, 122 and 124).
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide line-drawn gestures as inputs on the display, along with timecoded object presentations, in the view of Wang. One would have been motivated to provide the functionality as an improved viewing control and synchronization method for objects of interest with multiple capture mechanisms.
To further address generating a second virtual viewpoint image based on the virtual space in which the first object and the second object are arranged (Yee: Figure 4b and Paragraphs 125-127), Tong is also relied upon.
Tong is provided because it discloses camera image capture for 3D/virtual generation (Figure 2 and Column 2, Lines 30-32). Further, Tong discloses a presentation that provides, within the 3D display, a second graphical object (overlay) (Figure 3:300). Additionally, the virtual space can build a 3D presentation of content not originally captured by a camera (Figure 4 and Column 5, Line 54-Column 6, Line 4).
Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide secondary content through graphical overlays or inferred content generation utilizing the capture mechanisms of Wang. One would have been motivated to provide the functionality because it enhances operability of the playback and provides new experiences enabling closer viewer interaction (Tong: Background).
Claim 2: Wang, Yee and Tong disclose an apparatus according to claim 1, wherein the conversion is performed such that the second object is generated on a predetermined plane in the virtual space (Wang: Paragraphs 4 and 9-10; merging map frames to be provided in the viewport; Tong: Figure 3:300; and Yee: Figures 4a-c).
Claim 7: Wang, Yee and Tong disclose an apparatus according to claim 1, wherein the second virtual viewpoint image is displayed based on the virtual space in which another object based on another drawn line by another image processing apparatus is arranged (Wang: Figure 6:608 and Paragraph 36; additional data received from the terminal; Tong: Figure 3:300; and Yee: Figure 4b and Paragraphs 119-127; secondary viewport with object displayed).
Claim 8: Wang, Yee and Tong disclose an apparatus according to claim 1, wherein information representing the virtual viewpoint designated by the user is obtained (Wang: Paragraphs 22, 25 and 28 and 33; user provided input pertaining to ROI).
Claim 10: Wang, Yee and Tong disclose an apparatus according to claim 8, wherein in a case where a first operation is performed by the user, the input of the drawing of the line (Yee: Figure 4b and Paragraphs 125-127; drawn line) corresponding to the first operation is obtained (Wang: Paragraph 33; ROI obtained from user), and in a case where a second operation different from the first operation is performed by the user, information representing the virtual viewpoint designated is obtained in accordance with the second operation (Wang: Paragraphs 36-38; second operation obtains data pertaining to user terminal).
Claim 11: Wang, Yee and Tong disclose an apparatus according to claim 10, wherein the first operation and the second operation are operations on a touch panel, the first operation is an operation by a finger of the user on the touch panel, and the second operation is an operation by a rendering device (Wang: Paragraphs 27-28; touchscreen input can utilize finger input).
Claim 14: Wang, Yee and Tong disclose an apparatus according to claim 1, wherein the method further comprises: specifying a three-dimensional model of a specific object corresponding to a position where the line is drawn in the first virtual viewpoint image, wherein, at a three-dimensional position corresponding to the three-dimensional model of the specific object in the virtual space, the second virtual viewpoint image on which the second object corresponding to the drawn line is arranged is displayed (Wang: Paragraphs 32-33 and 36-37; 3D model produced according to region of interest, providing additional detail; Tong: Figure 3:300; and Yee: Figure 4b and Paragraphs 125-127; object displayed within viewport around line drawn).
Claims 17-18 are similar in scope to claim 1 and therefore rejected under the same rationale.
Method (Wang: Abstract and Paragraph 6).
Non-transitory computer-readable medium (Yee: Paragraph 16 and Tong: Column 16, Lines 13-15).
Claims 3 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (“Wang” 20140340404 A1), Yee (20190083885 A1) and Tong et al. (“Tong” 11257282 B2) in further view of Delamont (20200368616 A1).
Claim 3: Wang, Yee and Tong disclose an apparatus according to claim 2, but may not explicitly disclose wherein the conversion is performed such that a point of the second object corresponding to a point where the line is drawn is generated at an intersection, with respect to the predetermined plane, of a line passing through a virtual viewpoint corresponding to the virtual viewpoint image when the line is drawn and the point where the line is drawn (Tong: Figure 3:300 and Yee: Figures 4a-b and Paragraphs 125-127; line drawn).
Delamont is provided because it discloses a mixed reality functionality that utilizes adding additional data at a point/line of intersection within a space (viewport) (Claim 22): “2D images among other forms of images or video that are augmented over the user's real-world view of the space of the laser tag game or that of arena surrounding walls, ceilings, floors, objects and structure at the point of intersection (x,y,z) that the IR beam or IR Laser beam intersected with or was detected by an IR Sensor/Receiver, where the image is augmented using the spatial mapping data, volumetric, geometric, mesh data 3D models of user's real-world space”. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide the application of data at a specified point within the view of Wang. One would have been motivated to provide the functionality as a way to include supplemental data realistically and proportionally.
Claim 9: Wang, Yee and Tong disclose an apparatus according to claim 8, but may not explicitly disclose wherein the information representing the virtual viewpoint includes information representing a position of the virtual viewpoint and a line-of-sight direction from the virtual viewpoint.
Delamont is provided because it discloses a mixed reality functionality where presentation incorporates line of sight (Paragraphs 43, 605 and 1138). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide a line-of-sight point of view within the view of Wang. One would have been motivated to provide the functionality as a way to capture and present scenery accurately within the space (Delamont: Paragraph 97), while expanding on the ROI selection.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (“Wang” 20140340404 A1), Yee (20190083885 A1) and Tong et al. (“Tong” 11257282 B2) in further view of Dialameh et al. (“Dialameh” 8605141 B2).
Claim 4: Wang, Yee and Tong disclose an apparatus according to claim 1, but may not explicitly disclose wherein the second virtual viewpoint image that does not include the second object is displayed in a case where the converted second object does not exist in a range of a field of view based on the virtual viewpoint, and the second virtual viewpoint image including the object is displayed in a case where the converted second object exists in the range (Tong: Figure 3:300; camera provides possible out-of-range options; and Yee: Figures 4a-c; second objects do not originally exist).
Dialameh is provided because it discloses a mixed reality functionality that utilizes capture functionality to process out-of-view objects and incorporate them into a 3D digital model (Claim 1). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide out-of-view inclusion within the viewport presentation of Wang. One would have been motivated to provide the functionality as a way to capture more information for presentation and context, offering a more robust rendering.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (“Wang” 20140340404 A1), Yee (20190083885 A1) and Tong et al. (“Tong” 11257282 B2) in further view of Easwar (20040017393 A1).
Claim 6: Wang, Yee and Tong disclose an apparatus according to claim 1, but may not explicitly disclose wherein the method further comprises storing, in a storage, the timecode and the second object in association with each other (Yee: Figures 4a-c and Paragraphs 103-104; gesture presentation can be associated with time).
Easwar is provided because it discloses a viewport presentation functionality; further, the viewport is constructed utilizing spatial and temporal layering in order to associate objects with a time (Paragraphs 65-67), and values and attributes (i.e., timecodes) are stored for merging and presentation of a viewport (Paragraphs 171-172). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide time and data mapping with the viewport presentation of Wang. One would have been motivated to provide the functionality as a way to provide effective mapping, ensuring data is correctly correlated with an intended object during rendering.
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (“Wang” 20140340404 A1), Yee (20190083885 A1) and Tong et al. (“Tong” 11257282 B2) in further view of Bell et al. (“Bell” 9324190 B2).
Claim 12: Wang, Yee and Tong disclose an apparatus according to claim 1, but may not explicitly disclose wherein the image processing apparatus controls to display the second object in a first color if a difference between a timecode corresponding to the first virtual viewpoint image to which the line is drawn and a timecode of the second virtual viewpoint image to be displayed is a first value, and controls to display the second object in a second color lighter than the first color if the difference is a value larger than the first value (Wang: Paragraphs 37-38; modification of resolution which includes color saturation; Yee: Figures 4a-c and Paragraphs 122 and 124-127; drawn line/timecoded).
Bell is provided because it discloses a functionality of capturing 3D scenes and further provides functionality where, when scenes/frames are misaligned with the data, those scenes/objects are specifically highlighted (Column 8, Lines 37-52). This highlighting of data could take multiple forms of presentation (i.e., color differentiation). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize a known technique to improve a similar device and provide highlighting of differences within the view of Wang. One would have been motivated to provide the functionality as a way to capture accurate mapping within the space.
Claim 13: Wang, Yee, Tong and Bell disclose an apparatus according to claim 12, wherein the image processing apparatus controls to display the second object in a light color by changing at least one of a degree of transparency, brightness, color saturation, and chromaticity of the second object (Wang: Paragraphs 37-38; modification of resolution which includes color saturation; Yee: Figures 4a-c; gesture inputs provide shading).
Response to Arguments
Applicant’s arguments have been fully considered. Reference Yee, previously disclosed, provides additional cited functionality where a “line” is drawn on the display; the drawn input is then converted into an area where controls (second objects/uncaptured elements) are provided within the display as a second viewport (Figure 4b and Paragraphs 125-127). The claims as currently recited only require a conversion where an element (i.e., marker/icon/avatar; Applicant's specification, Paragraph 92) not captured by the camera or previously presented is provided in a second viewport in response to an input. Yee and Tong provide first views where inputs cause elements (controls/camera icons) to be incorporated in the display as a second view.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure:
20090154794 A1 [0033]
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice.
Applicant is reminded Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD KEATON whose telephone number is 571-270-1697. The examiner can normally be reached from 9:30am to 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor MICHELLE BECHTOLD can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHERROD L KEATON/ Primary Examiner, Art Unit 2148
3-4-2026