DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This action is responsive to the following communications: the RCE and Amendment filed on 04/28/2025. This action is made FINAL.
2. Claims 1 and 3-13 are pending in the case. Claims 1, 7 and 10-12 are independent claims. Claims 1, 3-7 and 10-11 have been amended. Claim 2 is cancelled. Claim 13 is newly added.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 and 3-12 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Specification
The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: "render camera" (claims 1 and 10).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 3-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1, 7, and 10-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01. The omitted elements are: (1) a 3D display update request from a client; and (2) a (frame) sequence update that does not update all packets (e.g., geometry, color/texture) in synchronization each time another packet is updated, as disclosed in Applicant's Specification (Para 19, 67; Fig. 9).
Claim 7 recites the limitation "the server command" in line 12. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the server command" in line 11. There is insufficient antecedent basis for this limitation in the claim.
Claims 3-6, 8-9 and 13 are rejected based on their dependency from a rejected base claim.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 12 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
Claim 12 is directed to a computer readable recording medium for performing a method according to claim 1, while claim 1 is directed to a method for sending an object from a server to a client. The limitations of claim 12 do not further limit the method of claim 1. It appears that claim 12 is, in effect, an independent claim that itself claims a computer-implemented method.
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1 and 3-13 are rejected under 35 U.S.C. 103 as being unpatentable over Jangwon Lee et al. (US 2020/0153885 A1, hereinafter "Lee") in view of Ruben Gonzalez (US 2007/0005795 A1, hereinafter "Gonzalez").
Independent claim 1, Lee discloses a method for sending at least one 3D object from a server to a client, the server comprising at least one processor and a memory, the method comprising executing, by the at least one processor, instructions stored in the memory, for (i.e. send 3D data, via program instructions – Para 942 – from a transmission apparatus including a server, processor and data storage – Para 105, 115, 277 – to a reception processor – Para 124 – of a client – Para 594):
extracting color information and alpha information on the server using a render camera, wherein the alpha information represents transparency to the color information of conventional RGB (i.e. Point cloud fusion/extraction: a process of modifying a previously acquired depth map to data capable of being encoded may be performed. For example, a pre-processing of allocating a location value of each object of image on 3D by modifying the depth map to a point cloud data type may be performed – Para 304; property data (color, reflectance, transparency, etc.) of multiple points – Para 338; After patch generation one or more attribute images may be generated based on the generated patches – Para 568);
extracting geometry information from the 3D object on the server using a depth camera (i.e. If there is a depth camera, a process of storing location information as to a depth of each object included in each image in image acquisition location may be performed – Para 303; Point cloud fusion/extraction: a process of modifying a previously acquired depth map to data capable of being encoded may be performed. For example, a pre-processing of allocating a location value of each object of image on 3D by modifying the depth map to a point cloud data type may be performed, and a data type capable of expressing 3D space information not the pointer cloud data type may be applied – Para 304);
simplifying the geometry information, wherein the simplifying the geometry information includes converting a cloud of points extracted from the 3D object to information of vertices of polygons representing a shape of the 3D object (i.e. The geometry, attribute, auxiliary data, and mesh data of the point cloud may each be configured as a separate stream or stored in different tracks in a file - Para 565; A texture picture/frame, which is a picture/frame representing the color information about each point constituting the point cloud video on a patch-by-patch basis, may be generated – Para 567; After patch generation, a geometry image, one or more attribute images, an occupancy map, auxiliary data, and/or mesh data may be generated based on the generated patches – Para 568); and
encoding and compressing a 3D stream including a server command, the color information, the alpha information, and the simplified geometry information (i.e. metadata needed to reconstruct the point cloud from the individual patches may be generated – Para 567; the auxiliary data, e.g. server command, represents metadata about a patch of a point cloud object – Para 573; the video encoder performs geometry video compression, attribute video compression, occupancy map compression, auxiliary data compression, and/or mesh data compression – Para 577; property data (color, reflectance, transparency, etc.) of multiple points – Para 338); and
sending a container stream of the encoded 3D stream from the server to the client (i.e. The encoding process for compressing the generated images and the added data to generate bitstreams may be performed and then the encapsulation process for converting the bitstreams to a file format for transmission or storage may be performed – Para 245; Fig. 13).
Lee suggests a server, as Lee teaches a transmission apparatus including a server (Para 105, 115, 277).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute a server for the transmission apparatus of Lee because the transmission apparatus includes a server; thus, the substitution yields predictable results.
Lee discloses frame packing includes geometry and texture data arranged in a frame sequence (Para 848, 852).
Gonzalez discloses wherein the server command, the color information, the alpha information, and the simplified geometry information contained in the 3D stream are synchronized to each other in a frame sequence when the 3D stream is created on the server (i.e. encoding data comprising at least one of video, text, audio, music and/or graphics elements as a video packet stream – Para 66; quantising colour data in a video stream – Para 180; object control packets include color data and alpha information – Para 183, 650, 651; parameters, e.g. server command, inserted in the data stream – Para 295; Fig. 2), and
wherein a frame of the frame sequence is not required to include all of the server command, the color information, the alpha information, and the simplified geometry information (i.e. encoding data comprising at least one of video, text, audio, music and/or graphics elements as a video packet stream – Para 66; Each scene may contain one or more streams 82 which contain one or more separate simultaneous media objects 52… Server side interaction support is where user interaction, shown here as user control packets 69…Each object 52 can contain one or more frames 88 encapsulated within data packets. When more than one media object 52 is present in a scene 81, the packets for each are interleaved – Para 298; Fig. 4; The server 21 is responsible for managing, reading, and parsing partial bit streams from the correct source(s), constructing a composite bit streams based on user input with appropriate control instructions from the client 20, and forwarding the bit stream to the client 20 for decoding and rendering. – Para 307; Fig. 6; The display scene should be rendered whenever visual data is received from the server 21 according to synchronization information, when a user selects a button by clicking or drags an object that is draggable, and when animations are updated – Para 321; the purpose of the server system 21 is to (i) create the correct data stream for the client to decode and render (ii) to transmit said data reliably to the client…Since these media objects may be composited simultaneously into a single scene, advanced non-sequential access capabilities are provided on the part of the server 21 to select the appropriate data components from each media object stream in order to interleave them into the final composite data stream to send to the client 20 – Para 341), which Lee fails to disclose.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gonzalez's method, wherein the server command, the color information, the alpha information, and the simplified geometry information contained in the 3D stream are synchronized to each other in a frame sequence when the 3D stream is created on the server, and wherein a frame of the frame sequence is not required to include all of the server command, the color information, the alpha information, and the simplified geometry information, with the method of Lee, in the same field of endeavor of transmitting 3D data via bitstream to a client for geometry reconstruction based on a user's current view, because transmitting interleaved packets of data in a single stream or self-contained object, and modifying the stream to include the correct data in each stream based on scene changes, provides rendering and interactive controls that offer the advantage of allowing users to control dynamic media composition (Gonzalez, Abstract).
Claim 3, Lee discloses the method according to claim 1, wherein the stream further includes at least one of metadata and sound data (i.e. an audio signal is to be decoded and reproduced – Para 239; transmission side processing includes audio for transmission with the generated image – Para 243, 245).
Claim 4, Lee discloses the method according to claim 1, further comprising receiving a client command from the client to redraw the 3D object on the server (i.e. information of a ROI, region of interest, is delivered based on user’s request – Para 239).
Claim 5, Lee discloses the method according to claim 1, further comprising:
receiving a client command from the client to redraw the 3D object (i.e. the tracked viewpoint of the user's region of interest is used for selection of a region of interest and is delivered to the video transmission apparatus for use in file selection/extraction – Para 237), redrawing the 3D object on the server, extracting the color information, the alpha information and the geometry information from the redrawn 3D object, simplifying the geometry information, and encoding a second 3D stream including an additional command, the color information, the alpha information and the simplified geometry information of the redrawn 3D object, and sending the encoded stream from the server to the client (i.e. The transmitter feedback processor can deliver the feedback information to the stitcher, the projection processor, the region-wise packing processor, the data encoder, the encapsulation processor, the metadata processor and/or the transmission processor. The feedback information may be delivered to the metadata processor and then delivered to each internal element according to an embodiment. Upon reception of the feedback information, internal elements can reflect the feedback information in processing of 360 video data – Para 116; the transmission apparatus receives data input, e.g. image, depth, audio, metadata – Para 272, 273 – that is processed, encoded and delivered to the reception side, e.g. client – Para 278; the auxiliary data, e.g. server command, represents metadata about a patch of a point cloud object – Para 573).
Gonzalez discloses wherein the additional command, the color information, the alpha information, and the simplified geometry information of the redrawn 3D object contained in the second 3D stream are synchronized to each other in a second frame sequence when the second 3D stream is created on the server (i.e. Server side interaction support is where user interaction, shown here as user control packets 69…Each object 52 can contain one or more frames 88 encapsulated within data packets. When more than one media object 52 is present in a scene 81, the packets for each are interleaved – Para 298; Fig. 4; The server 21 is responsible for managing, reading, and parsing partial bit streams from the correct source(s), constructing a composite bit streams based on user input with appropriate control instructions from the client 20, and forwarding the bit stream to the client 20 for decoding and rendering. – Para 307; Fig. 6; The display scene should be rendered whenever visual data is received from the server 21 according to synchronization information, when a user selects a button by clicking or drags an object that is draggable, and when animations are updated – Para 321), which Lee fails to disclose.
The rationale applied in the rejection of claim 1 applies equally herein.
Claim 6, Lee discloses the method according to claim 1, wherein the color information and the alpha information are obtained by the RGB camera (i.e. property data including color and transparency are obtained – Para 338 – via camera – Para 544; an RGB camera extracts color information corresponding to depth information – Para 339; the property data (color, reflectance, transparency, etc.) of multiple points – Para 338) and the geometry information is obtained by at least one depth camera (i.e. scene acquired via a depth camera – Para 231, 299).
Independent claim 7, Lee discloses a method for reproducing a 3D object on a client, the 3D object being present on a server, and the client comprising at least one processor and a memory, the method comprising executing, by the at least one processor, instructions stored in the memory (i.e. on a reception device that receives 3D data sent – Fig. 22 – via program instructions – Para 942 – from a transmission apparatus including a server, processor and data storage – Para 105, 115, 277 – to a reception processor – Para 124 – of a client – Para 594), for:
receiving from the server, a container stream of an encoded 3D stream including color information, alpha information, and simplified geometry information of the 3D object (i.e. receive transmitted encoded and compressed bitstream – Fig. 22; The image encoder includes geometry video compression, attribute video compression, occupancy map compression, auxiliary data compression, and mesh data compression – Para 564; frame packing includes geometry and texture data arranged in a frame sequence – Para 848, 852), wherein the simplified geometry information includes a converted cloud of points extracted from the 3D object to information of vertices of polygons representing a shape of the 3D object (i.e. In the mesh data generation, mesh data is generated from the patches. Mesh represents connection information between neighboring points. For example, it may represent data of a triangular shape. For example, mesh data according to the embodiments refers to connectivity between the points. – Para 574; The point cloud pre-processor or controller generates metadata related to patch generation, geometry image generation, attribute image generation, occupancy map generation, auxiliary data generation, and mesh data generation – Para 575);
decompressing and decoding the encoded 3D stream and extracting the color information, the alpha information, and the simplified geometry information from the decoded 3D stream, wherein the alpha information represents transparency to the color information of conventional RGB (i.e. a reception apparatus includes a video decoder – Para 592; The video decoder includes geometry video decompression, attribute video decompression, occupancy map decompression, auxiliary data decompression, and/or mesh data decompression. The image decoder includes geometry image decompression, attribute image decompression, occupancy map decompression, auxiliary data decompression, and/or mesh data decompression. Point cloud processing includes geometry reconstruction and attributes reconstruction – Para 592; property data (color, reflectance, transparency, etc.) of multiple points – Para 338);
synchronizing the color information, the alpha information, and the simplified geometry information in the frame sequence from the decoded 3D stream (i.e. Post-processing & composition: may mean a post-processing process for decoding and finally reproducing received/stored video/audio/text data – Para 224);
reproducing a shape of the 3D object based on the simplified geometry information (i.e. The point cloud processor (point cloud processing) performs geometry reconstruction and/or attributes reconstruction – Para 604); and
reconstructing the 3D object by projecting color/texture information on the reproduced shape of the 3D object (i.e. In the attribute reconstruction, the attribute video and/or attribute image are reconstructed from the decoded attribute video and/or decoded attribute image based on the occupancy map, auxiliary data, and/or mesh data. According to embodiments, for example, the attribute may be a texture. According to embodiments, an attribute may refer to a plurality of pieces of attribute information. When there is a plurality of attributes, the point cloud processor according to the embodiments performs a plurality of attribute reconstructions – Para 606; performing patch generation – Para 611 – including object projection onto a plane – Para 612, 613).
Lee suggests a server, as Lee teaches a transmission apparatus including a server (Para 105, 115, 277).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute a server for the transmission apparatus of Lee because the transmission apparatus includes a server; thus, the substitution yields predictable results.
Gonzalez discloses wherein the color information, the alpha information, and the simplified geometry information contained in the 3D stream are synchronized to each other in a frame sequence when the 3D stream is created on the server (i.e. encoding data comprising at least one of video, text, audio, music and/or graphics elements as a video packet stream – Para 66; quantising colour data in a video stream – Para 180; object control packets include color data and alpha information – Para 183, 650, 651; parameters, e.g. server command, inserted in the data stream – Para 295; Fig. 2), and
wherein a frame of the frame sequence is not required to include all of the server command, the color information, the alpha information, and the simplified geometry information (i.e. encoding data comprising at least one of video, text, audio, music and/or graphics elements as a video packet stream – Para 66; Each scene may contain one or more streams 82 which contain one or more separate simultaneous media objects 52… Server side interaction support is where user interaction, shown here as user control packets 69…Each object 52 can contain one or more frames 88 encapsulated within data packets. When more than one media object 52 is present in a scene 81, the packets for each are interleaved – Para 298; Fig. 4; The server 21 is responsible for managing, reading, and parsing partial bit streams from the correct source(s), constructing a composite bit streams based on user input with appropriate control instructions from the client 20, and forwarding the bit stream to the client 20 for decoding and rendering. – Para 307; Fig. 6; The display scene should be rendered whenever visual data is received from the server 21 according to synchronization information, when a user selects a button by clicking or drags an object that is draggable, and when animations are updated – Para 321; the purpose of the server system 21 is to (i) create the correct data stream for the client to decode and render (ii) to transmit said data reliably to the client…Since these media objects may be composited simultaneously into a single scene, advanced non-sequential access capabilities are provided on the part of the server 21 to select the appropriate data components from each media object stream in order to interleave them into the final composite data stream to send to the client 20 – Para 341); combining the color information and the alpha information to create color/texture information (i.e. quantising colour data in a video stream – Para 180; object control packets include color data and alpha information – Para 183, 650, 651; parameters, e.g. server command, inserted in the data stream – Para 295; Fig. 2), which Lee fails to disclose.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Gonzalez's method, wherein the color information, the alpha information, and the simplified geometry information contained in the 3D stream are synchronized to each other in a frame sequence when the 3D stream is created on the server, wherein a frame of the frame sequence is not required to include all of the server command, the color information, the alpha information, and the simplified geometry information, and combining the color information and the alpha information to create color/texture information, with the method of Lee, in the same field of endeavor of transmitting 3D data via bitstream to a client for geometry reconstruction based on a user's current view, because transmitting interleaved packets of data in a single stream or self-contained object, and modifying the stream to include the correct data in each stream based on scene changes, provides rendering and interactive controls that offer the advantage of allowing users to control dynamic media composition (Gonzalez, Abstract).
Claim 8, Lee discloses the method according to claim 7, further including displaying the reconstructed 3D object on a display device (i.e. reconstructed 3D geometry is rendered and displayed – Para 353).
Claim 9, Lee discloses the method according to claim 8, wherein the display device is a smart glass, smart glasses, a smartphone, a cell phone, a tablet, a laptop computer, a head-mounted display (i.e. viewport information viewed through a HMD – Para 558), a headset (i.e. VR/AR/MR display provides feedback during a display process – Para 353, 354, 356, 561), a slate PC, a gaming terminal or AR-device.
Independent claim 10, the claim is similar in scope to claim 1. Therefore, the rationale applied in the rejection of claim 1 applies herein.
Independent claim 11, the claim is similar in scope to claim 7. Therefore, the rationale applied in the rejection of claim 7 applies herein.
Independent claim 12, the claim is similar in scope to claim 1. Therefore, the rationale applied in the rejection of claim 1 applies herein.
Claim 13, Lee discloses the method according to claim 7 (Fig. 4, 44, Para 137, 143, 353, 356, 594).
Gonzalez discloses wherein the reproducing the shape of the 3D object and the projecting the color/texture information on the reproduced shape of the 3D object are synchronized depending on packet alignment in the frame sequence (i.e. The server 21 is responsible for managing, reading, and parsing partial bit streams from the correct source(s), constructing a composite bit stream based on user input with appropriate control instructions from the client 20, and forwarding the bit stream to the client 20 for decoding and rendering – Para 307; Fig. 6; The operation of the interaction management engine 41 is controlled by the object control component 40, which receives instructions (object control packets 68) sent from the server 21 that define how the interaction management engine 41 interprets user events 47 from the graphical user interface 73, and what animations and interactive behaviours are associated with individual media objects. The interaction management engine 41 is responsible for controlling the rendering engine 74 to carry out the rendering transformations – Para 319; the geometric transform is applied to each of the coordinates in the display list, and the alpha blending is performed during the scan conversion of the graphics primitives specified within the display list – Para 323), which Lee fails to disclose.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Gonzalez's method, wherein the reproducing of the shape of the 3D object and the projecting of the color/texture information on the reproduced shape of the 3D object are synchronized depending on packet alignment in the frame sequence, with the method of Lee, in the same field of endeavor of transmitting 3D data via bitstream to a client for geometry reconstruction based on a user's current view, because managing user interaction defines object control packets transmitted to a client for use in object reconstruction, which provides the benefit of rendering display scene updates based on user interaction and according to server-provided synchronization information (Gonzalez, Para 321).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON whose telephone number is (571)272-7659. The examiner can normally be reached Monday - Friday 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington, can be reached on 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHANTE E HARRISON/Primary Examiner, Art Unit 2615