Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
This is in response to applicant's amendment/response filed on 03/02/2016, which has been entered and made of record. Claims 1, 16, and 20 have been amended. No claims have been cancelled or added. Claims 1-20 are pending in the application.
Response to Arguments
Applicant's arguments filed on 03/02/2016 regarding the rejection of claims under 35 U.S.C. § 102 have been fully considered but are not persuasive.
Applicant submits that "Sakakima's invisible designation applies to the physical camera, not the virtual camera's view of the scene, and as shown in Fig. 6 of Sakakima, the virtual camera can still view the constituent point even when a camera is marked invisible" (Remarks, page 15).
The examiner disagrees with Applicant's premises and conclusion. The limitation "the dynamic scene element is invisible relative to the virtual camera at the first moment" is very broad. Sakakima teaches it at Fig. 4A and Fig. 4B: ¶0063, "Symbols 405 to 409 indicate cameras that capture images necessary for generating a virtual viewpoint image."; ¶0064, "the camera 407 whose orientation is similar to that of the virtual camera 404 is selected and coloring is performed by using the image captured by the camera 407."; ¶0069, "information indicating whether or not the constituent point is visible is described in a list shown in FIG. 7 in association with the camera ID."; and ¶0075, "it may also be possible to select the camera whose image capturing direction is the most similar to that of the virtual camera 604 among the cameras not excluded." When a physical camera is determined to be invisible, the image captured by that camera is not selected. Such an image is invisible relative to the virtual camera because the physical camera that captured it is not selected for the virtual camera. Therefore, Sakakima teaches "the dynamic scene element is invisible relative to the virtual camera at the first moment."
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-6, 9, 12-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sakakima (US Pub 2021/0133944 A1) in view of Xu (CN 111080798A, Google English translation provided).
As to claim 1, Sakakima discloses a method for rendering a scene with dynamic object elimination, performed by a processor (Sakakima, Abstract), the method comprising:
obtaining a first scene space unit in which a dynamic scene element in a scene space is located at a first moment, wherein a first number of scene space units form the scene space (Sakakima, ¶0063, “FIG. 4A is an example of a virtual viewpoint image that is generated in the present embodiment and a diagram showing an image viewed from behind a goal net.”);
obtaining a first photographing space unit in which a virtual camera is located in a photographing space at the first moment, wherein a second number of photographing space units form the photographing space (Sakakima, ¶0063, “FIG. 4B is a diagram in which the situation in FIG. 4A is observed from above for explaining a positional relationship among a goal net, a goal frame, and a virtual camera indicating a virtual viewpoint. In FIG. 4A and FIG. 4B, symbol 401 indicates a goal frame and symbol 402 indicates a goal net. For the sake of explanation, FIG. 4B is a diagram in which the portion of the crossbar of the goal frame 401 is omitted. Symbol 403 indicates an image capturing-target object and in this case, a player (specifically, goal keeper). Symbol 404 indicates a virtual camera and a virtual viewpoint image is generated for the viewpoint from this virtual camera. Symbols 405 to 409 indicate cameras that capture images necessary for generating a virtual viewpoint image.”);
determining a visibility relationship between the first scene space unit and the first photographing space unit based on prestored visibility data, wherein the prestored visibility data comprises visibility relationships between the first number of the scene space units and the second number of the photographing space units (Sakakima, ¶0072, “the processing at S501 to S506 is explained supplementally by using FIG. 6. In the case shown in FIG. 6, among the cameras 605 to 609, only the camera 609 is not determined to be visible because the constituent point 610 is shielded by another point configuring the three-dimensional model of the object (player) 603 (NO at 502). It is assumed that the goal net 602 is not defined as an object that shields (occludes) the constituent point 610 and the camera 607 is determined to be visible. Further, the information indicating whether or not visible for each camera, which is acquired as the results of the determination at 502, is stored by using a list 701 as shown in FIG. 7 at S503 or S504. In the list 701, a column that stores the information indicating the results of the visibility determination is provided and in a case of being visible, “1” is stored for each camera and on the other hand, in a case of being invisible, “0” is stored.”);
when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, removing the dynamic scene element from scene elements comprised in the scene space and obtaining remaining dynamic scene elements in the scene space (Sakakima, ¶0073, “the pixel value of the pixel corresponding to the constituent point included in the captured image of the camera determined to be visible is acquired at S507. Then, the acquired pixel value is described in the list 701 shown in FIG. 7. Part or all of the pixel values acquired at this step are used for coloring to the constituent point in subsequent processing.” ¶0075, “At S509, from among the visible cameras except for the camera excluded at S507, the captured image captured by the camera that is used for coloring to the constituent point 610 is selected.”);
wherein when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, the dynamic scene element is invisible relative to the virtual camera at the first moment (Sakakima, Fig. 4A and Fig. 4B, ¶0063, "Symbol 404 indicates a virtual camera and a virtual viewpoint image is generated for the viewpoint from this virtual camera. Symbols 405 to 409 indicate cameras that capture images necessary for generating a virtual viewpoint image." ¶0064, "the camera 407 whose orientation is similar to that of the virtual camera 404 is selected and coloring is performed by using the image captured by the camera 407." ¶0069, "information indicating whether or not the constituent point is visible is described in a list shown in FIG. 7 in association with the camera ID." ¶0075, "it may also be possible to select the camera whose image capturing direction is the most similar to that of the virtual camera 604 among the cameras not excluded." When a physical camera is determined to be invisible, the image captured by that camera is not selected. Such an image is invisible relative to the virtual camera because the physical camera that captured it is not selected for the virtual camera. Therefore, Sakakima teaches "the dynamic scene element is invisible relative to the virtual camera at the first moment.");
and
rendering a scene picture in the first moment by rendering content within an angle of view of a first virtual camera in the scene space based on the remaining dynamic scene elements in the scene space (Sakakima, ¶0004, “generating a three-dimensional model in the image generation apparatus, performing processing, such as rendering, and transmitting the virtual viewpoint contents to a user terminal.” ¶0060, “a case is explained mainly where a virtual viewpoint image is generated by performing rendering after the image generation apparatus 122 performs the coloring processing for each component (each point) of the three-dimensional model generated based on the captured images. In this case, the value of each pixel of the virtual viewpoint image is determined based on the color of the component of the colored three-dimensional model and the virtual viewpoint.”).
Assuming arguendo that Sakakima does not disclose that, when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, the dynamic scene element is invisible relative to the virtual camera at the first moment, the examiner relies on Xu as follows.
Xu teaches when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, the dynamic scene element is invisible relative to the virtual camera at the first moment (Xu, Page 4, “determining whether a shadow of the target model is visible with respect to a virtual camera within the spatial cell based on whether the simulated light is occluded” “judging whether a target model in the space cell rendering range is visible relative to a virtual camera in the space cell” Page 6, step 240, “if after building multiple simulated rays over multiple iterations, all of the simulated rays are found to be occluded, the shadow of the object model may be considered invisible to the virtual camera within the spatial cell. That is, when the virtual camera moves to the spatial cell, the shadow of the target model may not be rendered.”).
Sakakima and Xu are considered to be analogous art because both pertain to image generation. It would have been obvious before the effective filing date of the claimed invention to have modified Sakakima with the feature of "when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, the dynamic scene element is invisible relative to the virtual camera at the first moment," as taught by Xu. The suggestion/motivation would have been to determine whether an object is rendered when the object is occluded (Xu, background).
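For clarity of the record, the lookup-and-cull operation mapped above may be illustrated by the following sketch. The sketch is the examiner's own illustration under assumed, hypothetical names (SceneElement, cull_invisible, visibility); it is not drawn from the claims, Sakakima, or Xu, and shows only one way a prestored visibility table keyed by (scene space unit, photographing space unit) could be consulted to remove an invisible dynamic element before rendering.

```python
# Illustrative sketch only (hypothetical names, not claim language): a precomputed
# table maps (scene_space_unit, photographing_space_unit) pairs to a visibility flag,
# and dynamic elements whose unit is invisible from the camera's unit are culled
# before rendering.
from dataclasses import dataclass

@dataclass
class SceneElement:
    name: str
    scene_unit: int  # index of the scene space unit occupied at this moment

def cull_invisible(elements, camera_unit, visibility):
    """Keep only elements whose scene unit is visible from the camera's unit."""
    # Unknown pairs default to visible so nothing is culled without data.
    return [e for e in elements if visibility.get((e.scene_unit, camera_unit), True)]

# Example: scene unit 3 is invisible from photographing unit 1; unit 5 is visible.
visibility = {(3, 1): False, (5, 1): True}
elements = [SceneElement("player_A", 3), SceneElement("player_B", 5)]
remaining = cull_invisible(elements, camera_unit=1, visibility=visibility)
print([e.name for e in remaining])  # -> ['player_B']; player_A is removed before rendering
```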
As to claim 4, claim 1 is incorporated and Sakakima discloses the prestored visibility data comprises the second number of binary sequences, and each binary sequence is used for indicating first visibility relationships between the first number of the scene space units and a same photographing space unit (Sakakima, ¶0080, “A mask information setting unit 802 performs processing to set information indicating whether or not an area that can be made use of for coloring within a captured image for each camera of a plurality of cameras used for image capturing.” ¶0081, “The mask information in the present embodiment is information relating to an occlusion area within a captured image and information indicating whether the area may be used for coloring and the like. The occlusion area is an area having a possibility that a target object is shielded (occluded) by another object (referred to as shielding object) existing on a line connecting the target object and a camera. For example, the mask information is an image for explicitly indicating that coloring should not be performed by using the pixel value of the area at the time of performing coloring processing for the three-dimensional model representing the target object that is hidden behind the shielding object, such as a goal frame.”).
As to claim 5, claim 4 is incorporated and Sakakima discloses each binary sequence comprises at least the first number of bits, a value of each bit is a first value or a second value, wherein: the first value is used for indicating that a first visibility relationship between a scene space unit corresponding to the bit and the same photographing space unit is invisible, and the second value is used for indicating that the first visibility relationship between the scene space unit corresponding to the bit and the same photographing space unit is visible (Sakakima, ¶0072, “the processing at S501 to S506 is explained supplementally by using FIG. 6. In the case shown in FIG. 6, among the cameras 605 to 609, only the camera 609 is not determined to be visible because the constituent point 610 is shielded by another point configuring the three-dimensional model of the object (player) 603 (NO at 502). It is assumed that the goal net 602 is not defined as an object that shields (occludes) the constituent point 610 and the camera 607 is determined to be visible. Further, the information indicating whether or not visible for each camera, which is acquired as the results of the determination at 502, is stored by using a list 701 as shown in FIG. 7 at S503 or S504. In the list 701, a column that stores the information indicating the results of the visibility determination is provided and in a case of being visible, “1” is stored for each camera and on the other hand, in a case of being invisible, “0” is stored.”).
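A minimal sketch of the bit-per-scene-unit encoding addressed above is provided for illustration only. The convention shown (one binary sequence per photographing space unit, with "0" indicating invisible and "1" indicating visible, mirroring the "0"/"1" list of Sakakima's FIG. 7) and the function name is_visible are the examiner's assumptions, not claim language.

```python
# Illustrative sketch only: one binary sequence per photographing space unit,
# one bit per scene space unit; "0" = invisible, "1" = visible (assumed convention).

def is_visible(bits: str, scene_unit: int) -> bool:
    """Read the bit for a given scene space unit from one unit's binary sequence."""
    return bits[scene_unit] == "1"

sequence_for_unit = "101101"  # visibility of scene units 0..5 from one photographing unit
print(is_visible(sequence_for_unit, 2))  # -> True  (scene unit 2 is visible)
print(is_visible(sequence_for_unit, 1))  # -> False (scene unit 1 is invisible)
```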
As to claim 6, claim 5 is incorporated and Sakakima discloses the scene space is a space which at least one dynamic scene element in a virtual environment probabilistically reaches, and each binary sequence comprises a first sequence section and a second sequence section, wherein: the first sequence section is used for indicating the first visibility relationships between the first number of the scene space units and the same photographing space unit, and the second sequence section is used for indicating second visibility relationships between at least one static scene element and the same photographing space unit in the virtual environment (Sakakima, ¶0072, “the processing at S501 to S506 is explained supplementally by using FIG. 6. In the case shown in FIG. 6, among the cameras 605 to 609, only the camera 609 is not determined to be visible because the constituent point 610 is shielded by another point configuring the three-dimensional model of the object (player) 603 (NO at 502). It is assumed that the goal net 602 is not defined as an object that shields (occludes) the constituent point 610 and the camera 607 is determined to be visible. Further, the information indicating whether or not visible for each camera, which is acquired as the results of the determination at 502, is stored by using a list 701 as shown in FIG. 7 at S503 or S504. In the list 701, a column that stores the information indicating the results of the visibility determination is provided and in a case of being visible, “1” is stored for each camera and on the other hand, in a case of being invisible, “0” is stored.”).
As to claim 9, claim 1 is incorporated and Sakakima discloses prior to the determining of the visibility relationship between the first scene space unit and the first photographing space unit in the photographing space according to the prestored visibility data, the method further comprises: determining space unit parameter information, a first region range of the scene space, and a second region range of the photographing space, wherein the space unit parameter information comprises size parameters of the photographing space units and the scene space units; dividing the scene space into the first number of the scene space units and dividing the photographing space into the second number of the photographing space units according to the space unit parameter information, the first region range, and the second region range; obtaining visibility data by determining the visibility relationships between the first number of the scene space units and the second number of the photographing space units; and storing the visibility data (Sakakima, Fig. 6, Fig. 9A-9B, Fig. 11, ¶0091, “In FIG. 11, symbol 1101 indicates a goal frame, symbol 1102 indicates a goal net, and symbol 1103 indicates an object (player). FIG. 11 shows a case where one image is selected by using mask information from among images captured by cameras 1105 to 1109, respectively, in coloring processing for a constituent point 1110, which is one of constituent points corresponding to the object 1103.” ¶0092, “A camera list 1201 is a list that stores information indicating whether or not visible, mask information, and pixel values for each camera. As described previously, the information indicating whether or not visible is stored in the camera list 1201 at S1003 or S1005. Further, the mask information is stored in the cameral list 1201 at S1004. The mask information in the present embodiment indicates whether or not a mask exists and the mask type in a case of the mask area. Here, as the value that the mask information can take, a value indicating that the area is not a mask area (there is no mask) is defined as “0”. Further, the value indicating that the area is a mask area by an object that shields completely, such as the goal frame, is defined as “1” and the value indicating that the area is an area in which an area that is shielded by an object, such as the goal net, and an area that is not shielded exist in a mixed manner is defined as “2”.”).
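The division and precomputation steps recited in claim 9 may be illustrated, purely as a sketch under stated assumptions, as follows. The helper names divide, build_visibility_data, and visible_fn are hypothetical, and the occlusion test itself is stood in for by a placeholder, since neither the claim nor the cited art is limited to any particular test.

```python
# Illustrative sketch only (hypothetical names): divide a 1-D region range into units
# of a given size, then record one visibility bit per (scene unit, photographing unit)
# pair as a binary sequence per photographing unit, and keep the result as stored data.
import math

def divide(region_start, region_end, unit_size):
    """Number of units needed to cover [region_start, region_end)."""
    return math.ceil((region_end - region_start) / unit_size)

def build_visibility_data(n_scene_units, n_photo_units, visible_fn):
    """visible_fn(i, j) -> bool stands in for an offline occlusion/visibility test."""
    return ["".join("1" if visible_fn(i, j) else "0" for i in range(n_scene_units))
            for j in range(n_photo_units)]

n_scene = divide(0.0, 100.0, unit_size=10.0)  # 10 scene space units
n_photo = divide(0.0, 30.0, unit_size=10.0)   # 3 photographing space units
data = build_visibility_data(n_scene, n_photo, visible_fn=lambda i, j: (i + j) % 4 != 0)
print(data[0])  # binary sequence for photographing space unit 0
```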
As to claim 12, claim 1 is incorporated and Sakakima discloses sizes of the scene space units are correlated with at least one of the following: a ground surface type of the scene space, and a maximum size of a scene element in the scene space (Sakakima, Fig. 6 and Fig. 11, ¶0040, “the control station 124 installs a marker on the image capturing-target field and by using the image captured by each camera 112, derives the position and the orientation in the world coordinates of each camera and the focal length. Information on the derived position, orientation, and focal length of each camera is transmitted to the image generation apparatus 122. The data of the three-dimensional model and the information on each camera transmitted to the image generation apparatus 122 are used at the time of the image generation apparatus 122 generating a virtual viewpoint image.”).
As to claim 13, claim 1 is incorporated and Sakakima discloses the scene space has more than one unit division modes, the more than one unit division modes corresponding to different scene element types and different visibility data; and the visibility relationship associated with the dynamic scene element is determined according to first visibility data, wherein the first visibility data corresponds to a scene element type to which the dynamic scene element pertains (Sakakima, Fig. 6, Fig. 9A-9B, Fig. 11, ¶0091, “In FIG. 11, symbol 1101 indicates a goal frame, symbol 1102 indicates a goal net, and symbol 1103 indicates an object (player). FIG. 11 shows a case where one image is selected by using mask information from among images captured by cameras 1105 to 1109, respectively, in coloring processing for a constituent point 1110, which is one of constituent points corresponding to the object 1103.” ¶0092, “A camera list 1201 is a list that stores information indicating whether or not visible, mask information, and pixel values for each camera. As described previously, the information indicating whether or not visible is stored in the camera list 1201 at S1003 or S1005. Further, the mask information is stored in the cameral list 1201 at S1004. The mask information in the present embodiment indicates whether or not a mask exists and the mask type in a case of the mask area. Here, as the value that the mask information can take, a value indicating that the area is not a mask area (there is no mask) is defined as “0”. Further, the value indicating that the area is a mask area by an object that shields completely, such as the goal frame, is defined as “1” and the value indicating that the area is an area in which an area that is shielded by an object, such as the goal net, and an area that is not shielded exist in a mixed manner is defined as “2”.”).
As to claim 14, claim 1 is incorporated and Sakakima discloses the dynamic scene element is a first dynamic scene element, and wherein after the rendering the scene, the method further comprises: determining a second scene space unit in which a second dynamic scene element in the scene space is located at a second moment, wherein the second moment is after the first moment and the second dynamic scene element is a scene element that exists in the scene space at the second moment; determining a fifth visibility relationship between the second scene space unit and a second photographing space unit in the photographing space according to the prestored visibility data, wherein the second photographing space unit refers to a photographing space unit in which the virtual camera is located at the second moment; removing the second dynamic scene element from the scene elements comprised in the scene space, to obtain remaining scene elements in the scene space when the fifth visibility relationship between the second scene space unit and the second photographing space unit is invisible; and rendering content within the angle of view of the virtual camera in the scene space based on the remaining scene elements in the scene space, to obtain the scene picture at the second moment (Sakakima, Fig. 6, Fig. 9A-9B, Fig. 11, ¶0091, “In FIG. 11, symbol 1101 indicates a goal frame, symbol 1102 indicates a goal net, and symbol 1103 indicates an object (player). FIG. 11 shows a case where one image is selected by using mask information from among images captured by cameras 1105 to 1109, respectively, in coloring processing for a constituent point 1110, which is one of constituent points corresponding to the object 1103.” ¶0092, “A camera list 1201 is a list that stores information indicating whether or not visible, mask information, and pixel values for each camera. As described previously, the information indicating whether or not visible is stored in the camera list 1201 at S1003 or S1005. Further, the mask information is stored in the cameral list 1201 at S1004. The mask information in the present embodiment indicates whether or not a mask exists and the mask type in a case of the mask area. Here, as the value that the mask information can take, a value indicating that the area is not a mask area (there is no mask) is defined as “0”. Further, the value indicating that the area is a mask area by an object that shields completely, such as the goal frame, is defined as “1” and the value indicating that the area is an area in which an area that is shielded by an object, such as the goal net, and an area that is not shielded exist in a mixed manner is defined as “2”.”).
As to claim 15, claim 14 is incorporated and Sakakima discloses the dynamic scene element is a first dynamic scene element, and wherein the first dynamic scene element occupies a plurality of the scene space units, and the method further comprises: determining, from the plurality of the scene space units, the first scene space unit having a visibility relationship with the first photographing space unit being invisible; and removing an element part of the dynamic scene element located in the invisible first scene space unit, to obtain a remaining element part of the dynamic scene element, wherein the remaining scene elements in the scene space comprise the remaining element part of the dynamic scene element (Sakakima, Fig. 6, Fig. 9A-9B, Fig. 11, ¶0091, “In FIG. 11, symbol 1101 indicates a goal frame, symbol 1102 indicates a goal net, and symbol 1103 indicates an object (player). FIG. 11 shows a case where one image is selected by using mask information from among images captured by cameras 1105 to 1109, respectively, in coloring processing for a constituent point 1110, which is one of constituent points corresponding to the object 1103.” ¶0092, “A camera list 1201 is a list that stores information indicating whether or not visible, mask information, and pixel values for each camera. As described previously, the information indicating whether or not visible is stored in the camera list 1201 at S1003 or S1005. Further, the mask information is stored in the cameral list 1201 at S1004. The mask information in the present embodiment indicates whether or not a mask exists and the mask type in a case of the mask area. Here, as the value that the mask information can take, a value indicating that the area is not a mask area (there is no mask) is defined as “0”. Further, the value indicating that the area is a mask area by an object that shields completely, such as the goal frame, is defined as “1” and the value indicating that the area is an area in which an area that is shielded by an object, such as the goal net, and an area that is not shielded exist in a mixed manner is defined as “2”.”).
As to claim 16, Sakakima discloses an apparatus for rendering a scene with dynamic object elimination, the apparatus comprising: at least one memory configured to store program code; and at least one processor configured to read the program code and operate as instructed by the program code, the program code comprising: first obtaining code configured to cause the at least one first processor to obtain a first scene space unit in which a dynamic scene element in a scene space is located at a first moment, wherein a first number of scene space units form the scene space; second obtaining code configured to cause the at least one first processor to obtain a first photographing space unit in which a virtual camera is located in a photographing space at the first moment, wherein a second number of photographing space units form the photographing space; first determining code configured to cause the at least one first processor to determine a visibility relationship between the first scene space unit and the first photographing space unit based on prestored visibility data, wherein the prestored visibility data comprises visibility relationships between the first number of the scene space units and the second number of the photographing space units; first removing code configured to cause the at least one first processor to, when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, remove the dynamic scene element from scene elements comprised in the scene space and obtaining remaining dynamic scene elements in the scene space, wherein when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, the dynamic scene element is invisible relative to the virtual camera at the first moment; and first rendering code configured to cause the at least one first processor to render a scene picture in the first moment by rendering content within an angle of view of a first virtual camera in the scene space based on the remaining dynamic scene elements in the scene space (See claim 1 for detailed analysis.).
As to claim 19, claim 16 is incorporated and Sakakima discloses the prestored visibility data comprises the second number of binary sequences, and each binary sequence is used for indicating first visibility relationships between the first number of the scene space units and a same photographing space unit (See claim 4 for detailed analysis.).
As to claim 20, Sakakima discloses a non-transitory computer-readable medium storing program code which, when executed by one or more processors of a device for rendering a scene with dynamic object elimination, cause the one or more processors to at least: obtain a first scene space unit in which a dynamic scene element in a scene space is located at a first moment, wherein a first number of scene space units form the scene space; obtain a first photographing space unit in which a virtual camera is located in a photographing space at the first moment, wherein a second number of photographing space units form the photographing space; determine a visibility relationship between the first scene space unit and the first photographing space unit based on prestored visibility data, wherein the prestored visibility data comprises visibility relationships between the first number of the scene space units and the second number of the photographing space units; when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, remove the dynamic scene element from scene elements comprised in the scene space and obtaining remaining dynamic scene elements in the scene space, wherein when the visibility relationship between the first scene space unit and the first photographing space unit is invisible, the dynamic scene element is invisible relative to the virtual camera at the first moment; and render a scene picture in the first moment by rendering content within an angle of view of a first virtual camera in the scene space based on the remaining dynamic scene elements in the scene space (See claim 1 for detailed analysis.).
Claims 2-3, 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Sakakima (US Pub 2021/0133944 A1) in view of Xu (CN 111080798A) and Brandt et al. (US Pub 2025/0022212 A1).
As to claim 2, claim 1 is incorporated and Sakakima discloses the obtaining the first scene space unit comprises:
obtaining coordinate information of the dynamic scene element and size information of the dynamic scene element at the first moment (Sakakima, ¶0040, “the control station 124 installs a marker on the image capturing-target field and by using the image captured by each camera 112, derives the position and the orientation in the world coordinates of each camera and the focal length. Information on the derived position, orientation, and focal length of each camera is transmitted to the image generation apparatus 122. The data of the three-dimensional model and the information on each camera transmitted to the image generation apparatus 122 are used at the time of the image generation apparatus 122 generating a virtual viewpoint image.”);
Sakakima does not disclose determining a center point of the dynamic scene element based on the coordinate information of the dynamic scene element and the size information of the dynamic scene element; and determining a scene space unit, from the first number of the scene space units, in which the center point of the dynamic scene element is located as the first scene space unit.
Brandt teaches determining a center point of the dynamic scene element based on the coordinate information of the dynamic scene element and the size information of the dynamic scene element; and determining a scene space unit, from the first number of the scene space units, in which the center point of the dynamic scene element is located as the first scene space unit (Brandt, “[0175] FIG. 6 shows the alignment between a coordinate system associated with the scene 400 and the range of viewing angles at which the object is shown. This figure may be further explained as follows: a video stream(s) provided by the server system may show the object at a limited set of angles. For a client device, to be able to place the video-based representation of the object in the scene before or when rendering the scene, the client device may make use of one or more parameters: [0176] Object position. The object position, for example defined as X,Y,Z values, may define where in the scene the object should be placed, for example with the center of the object. [0177] Orientation. The orientation may define how the viewing angles of the object in the video stream(s) relate to the coordinate system of the scene, e.g., to the absolute north of the scene.”).
Sakakima and Brandt are considered to be analogous art because both pertain to image generation. It would have been obvious before the effective filing date of the claimed invention to have modified Sakakima with the features of "determining a center point of the dynamic scene element based on the coordinate information of the dynamic scene element and the size information of the dynamic scene element; and determining a scene space unit, from the first number of the scene space units, in which the center point of the dynamic scene element is located as the first scene space unit," as taught by Brandt. The suggestion/motivation would have been to align the coordinate system associated with the scene with the range of viewing angles at which the object is shown (Brandt, ¶0175).
As to claim 3, claim 2 is incorporated and the combination of Sakakima and Brandt discloses the obtaining the first scene space unit comprises: determining an offset of the center point of the dynamic scene element relative to a starting point of the scene space at the first moment in at least one spatial dimension; obtaining a quantity of scene units between the starting point and the center point in the at least one spatial dimension by dividing, for each spatial dimension among the at least one spatial dimension, the offset corresponding to the spatial dimension by a size of the scene space unit in the spatial dimension; and determining the first scene space unit in which the dynamic scene element is located based on the quantity of scene units between the starting point and the center point in each spatial dimension (Brandt, ¶0144, “the interval may be chosen to be centered with respect to a current or predicted relative direction (i.e., a current or predicted viewing angle), or may be offset so that the current or predicted viewing angle forms the minimum or maximum of the interval. In some examples, the width of the interval may be selected based on a distance from the viewing position to the object position, or vice versa, which distance may also be referred to as ‘relative distance’. For example, the width of the interval, the number of viewing angles within the interval, and/or the spacing of the viewing angles within the interval, may be selected based on the relative distance to the object.” ¶¶0178-0180, “[0178] Range: This parameter may define the range of viewing angles in the video stream(s) relatively to the center (position) of the object, as also shown in FIG. 6 and elsewhere also referred to as a ‘width’ of the range or width of an ‘interval’. The range may for example be defined in degrees between [0°, 360°]. [0179] Center Angle: This parameter may identify a reference view in the video stream(s). For example, for a spatial mosaic, the center angle may identify the mosaic tile which may show the object at the viewing angle at a time of the request by the client. The center angle parameter may be used together with the viewing position to select a mosaic tile from the spatial mosaic for a current viewing position, which may have changed with respect to the viewing position at a time of the request. [0180] Scale: This parameter may define how the object shown in the video stream(s) scales relatively to other content of the scene such as PRVA's. For example, if the object in the PRVA is shown at a width of 100 pixels and the object is shown in the video streams at a width of 200 pixels, the scale may be 2.0.”).
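For illustration only, the center-point and offset-division computation addressed in claims 2 and 3 can be sketched as below. The names center_point and unit_index and the axis-aligned min-corner convention are the examiner's assumptions and are not taken from the claims, Sakakima, or Brandt.

```python
# Illustrative sketch only (hypothetical names): the unit containing an element's
# center point is found, per spatial dimension, by dividing the offset from the
# scene-space starting point by the unit size and taking the integer part.

def center_point(position, size):
    """Center of an axis-aligned element given its min-corner position and size."""
    return tuple(p + s / 2.0 for p, s in zip(position, size))

def unit_index(center, origin, unit_size):
    """Per-dimension count of units between the starting point and the center."""
    return tuple(int((c - o) // u) for c, o, u in zip(center, origin, unit_size))

center = center_point(position=(12.0, 0.0, 7.5), size=(2.0, 2.0, 1.0))
print(unit_index(center, origin=(0.0, 0.0, 0.0), unit_size=(5.0, 5.0, 5.0)))
# -> (2, 0, 1): the element's center lies in scene space unit (2, 0, 1)
```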
As to claim 17, claim 16 is incorporated and the combination of Sakakima and Brandt discloses the first obtaining code comprises: third obtaining code configured to cause the at least one first processor to obtain coordinate information of the dynamic scene element and size information of the dynamic scene element at the first moment; second determining code configured to cause the at least one first processor to determine a center point of the dynamic scene element based on the coordinate information of the dynamic scene element and the size information of the dynamic scene element; and third determining code configured to cause the at least one first processor to determine a scene space unit, from the first number of the scene space units, in which the center point of the dynamic scene element is located as the first scene space unit (See claim 2 for detailed analysis.).
As to claim 18, claim 17 is incorporated and the combination of Sakakima and Brandt discloses the first obtaining code further comprises: fourth determining code configured to cause the at least one first processor to determine an offset of the center point of the dynamic scene element relative to a starting point of the scene space at the first moment in at least one spatial dimension; fourth obtaining code configured to cause the at least one first processor to obtain a quantity of scene units between the starting point and the center point in the at least one spatial dimension by dividing, for each spatial dimension among the at least one spatial dimension, the offset corresponding to the spatial dimension by a size of the scene space unit in the spatial dimension; and fifth determining code configured to cause the at least one first processor to determine the first scene space unit in which the dynamic scene element is located based on the quantity of scene units between the starting point and the center point in each spatial dimension (See claim 3 for detailed analysis.).
Allowable Subject Matter
Claims 7-8, 10-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
7. The method according to claim 6, wherein the visibility relationship between the first scene space unit and the first photographing space unit is determined based on a third visibility relationship between a magnification space unit corresponding to the first scene space unit and the first photographing space unit, wherein the magnification space unit is a space unit having a size greater than that of the first scene space unit and comprising the first scene space unit.
8. The method according to claim 7, wherein a side length of the magnification space unit is equal to a sum of a side length of the first scene space unit and a length of a maximum dynamic scene element, and the maximum dynamic scene element has a maximum size in the virtual environment.
10. The method according to claim 9, wherein the visibility data comprises the first number of binary sequences, and each binary sequence is used for indicating the visibility relationships between the first number of the scene space units and a same photographing space unit, and wherein the storing the visibility data comprises: obtaining cluster sets by performing clustering on the second number of binary sequences according to Hamming distances; determining, for each cluster set, a respective central sequence corresponding to the cluster set according to binary sequences comprised in the cluster set; for each cluster set, when the respective central sequence corresponding to the cluster set meets a clustering stopping condition, indicating the cluster set by the respective central sequence corresponding to the cluster set instead of the binary sequences comprised in the cluster set; saving compressed visibility data, wherein the compressed visibility data comprises central sequences corresponding to the cluster sets; and indicating third visibility relationships between a plurality of photographing space units corresponding to the cluster set to which each respective central sequence pertains and the first number of the scene space units by the fourth visibility relationships between the photographing space unit indicated by the central sequence and the first number of the scene space units instead.
11. The method according to claim 10, wherein the determining the respective central sequence corresponding to the cluster set comprises: determining a first quantity and a second quantity according to values at ith bits of the binary sequences comprised in the cluster set, wherein the first quantity is a quantity of binary sequences having a value 1 at the ith bits, the second quantity is a quantity of binary sequences having a value 0 at the ith bits, i being a positive integer; determining a value at the ith bits of the respective central sequence corresponding to the cluster set according to a size relationship between the first quantity and the second quantity; and determining the respective central sequence corresponding to the cluster set according to values at bits of the respective central sequence corresponding to the cluster set.
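For completeness of the record, the Hamming-distance clustering and per-bit majority rule recited in objected-to claims 10 and 11 can be sketched as follows. This is an illustrative sketch only; the tie-breaking choice (a tie at a bit position resolving to "1") and the function names hamming and central_sequence are the examiner's assumptions, as the claims do not specify them.

```python
# Illustrative sketch only (hypothetical names): the per-bit majority rule of claim 11
# applied to the binary sequences of one cluster set, plus the Hamming distance used
# for the clustering of claim 10.

def hamming(a: str, b: str) -> int:
    """Number of bit positions at which two equal-length binary sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def central_sequence(cluster):
    """Bit i of the central sequence is 1 if at least as many member sequences have 1 at bit i."""
    bits = []
    for column in zip(*cluster):
        ones = column.count("1")    # first quantity
        zeros = column.count("0")   # second quantity
        bits.append("1" if ones >= zeros else "0")  # assumed tie-break: "1"
    return "".join(bits)

cluster = ["1011", "1001", "1111"]
center = central_sequence(cluster)
print(center)                                 # -> '1011'
print([hamming(center, s) for s in cluster])  # -> [0, 1, 1]
```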
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571)270-7951. The examiner can normally be reached on M-F 8-5 PST Mid-day flex.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/
Primary Examiner, Art Unit 2613