DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08/29/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 102
1 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
2 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
3 Claims 1-2 and 9-10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Petkov et al. (US 10692267 B1).
4 Regarding claim 1, Petkov teaches a method for generating smooth transitions between animated data frames ([Abstract] reciting “Systems and methods are provided for generating smooth transitions between volume rendering presets when the volume rendering is used as part of an animation system.”), comprising:
(a) providing a client device having at least a processor, a memory storage device, and a user interface display ([Page 13; Column 3, Lines 33-38] reciting “The system includes a medical imaging device, a memory, an image processor, and a display. The medical imaging device is configured to acquire medical imaging data. The memory is configured to store rendering presets for a plurality of keyframes in a sequence of keyframes.”);
(b) identifying, by the processor, a viewable area bounding box on the user interface display ([Page 14; Column 5, Lines 61-67] and [Page 15; Column 7, Lines 59-65] reciting “FIG. 3 depicts an example of a constrained spline interpolator. FIG. 3 depicts a series of keyframes that are interpolated using two different interpolators 63—a constrained spline interpolator (solid line) and a non-constrained spline interpolator (dotted line). Both interpolators 63 perform well for interpolating values within the bounds.”);
(c) determining, by the processor, frame data within a selected timeframe to render the viewable area ([Page 14; Column 5, Lines 3-14] reciting “Using keyframes, a user may change the beam size of the light from one value to another within a predefined period of time. At the start of the animation, a beam size value is set. Another value is set for the end of the animation. Thus, the software program automatically interpolates the two values, creating a smooth transition in images rendered with the interpolated values. Another example is color. A voxel may be assigned two different colors at two different points in time, e.g. with two different keyframes. The intermediate frames may be generated automatically by interpolating the two colors over the time between the two keyframes.”);
(d) requesting, by the client device, the frame data from a data source ([Page 17; Column 12, Lines 48-55] reciting “Alternatively, data-driven easing may use a look-up table, with values derived from measurements of the rendered image. For example, a voxel fade effect may have a non-linear response to the time parameter in the animation system 100, after multiple voxels are projected along a viewing ray. A look-up table easing function may be used to enforce a linear transition in the rendered images.”);
(e) receiving, by the client device, the frame data ([Page 13; Column 3, Lines 32-36] reciting “In a third aspect, a system is provided for generating animated medical imaging data. The system includes a medical imaging device, a memory, an image processor, and a display. The medical imaging device is configured to acquire medical imaging data.”);
(f) loading, by the processor, the frame data into tile objects ([Abstract] reciting “A windowing-compensated look-up table interpolation is used to interpolate between adjacent keyframes that include user defined rendering presets.”; [Page 17; Column 11, Lines 11-15] reciting “The smooth transitions may also be used as part of an interactive viewer to animate rendering parameter changes during state transitions—e.g., loading a new preset, changing volume classification, undo/redo operations, etc.”);
(g) determining, by the processor, a transition position in a temporal range of animation positions from a beginning frame to an end frame ([Page 14; Column 5, Lines 3-9] reciting “Using keyframes, a user may change the beam size of the light from one value to another within a predefined period of time. At the start of the animation, a beam size value is set. Another value is set for the end of the animation. Thus, the software program automatically interpolates the two values, creating a smooth transition in images rendered with the interpolated values.”);
(h) interpolating, by the processor, at least one additional frame of data associated with the transition position utilizing bicubic interpolation ([Page 15; Column 7, Lines 38-40] reciting “Many variants may be used including the cubic spline, the natural cubic spline and the Catmull-Rom spline.”; [Page 16; Column 9, Lines 59-65] reciting “In some embodiments, the look-up table is constructed automatically based on a perceptual or non-perceptual image metric. FIG. 5 depicts several examples of easing functions including elastic, normalized Bezier, quantic, quartic, cubic, quadratic, sin, and linear. Each of these functions may be used to adjust the interpolation so that a smooth transition is provided.”);
(i) rendering, using a shader, an animated sequence of the data frames including the beginning frame, the at least one additional frame of data, and the end frame to the client device ([Page 15; Column 8, Lines 53-60] reciting “Windowing is the process of selecting some segment of the total value range and then displaying the values within that segment over the full range. In a simple example, an image ranges from full white to full black over a range of 1 to 100 using different shades of grey. If the windowing is set to 1 to 25, then the image will be rendered with shades of full white to full black over just 1 to 25 instead of from 1 to 100. Anything in the original image over 25 is rendered black.”); and
(j) repeating (g) through (i) in a loop ([Page 17; Column 12, Lines 16-21] reciting “Acts A221, A223, and A225 describe the windowing compensated constrained spline interpolation. These acts may be repeated in order to generate interpolated frames for every time t of the sequence of frames in between a plurality of keyframes that have rendering parameters set by a user or automatically by an application.”; See above rejections).
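For context, the keyframe-interpolation loop recited in steps (g) through (j) above can be illustrated with a minimal, hypothetical sketch that assumes a simple linear interpolant; the names (`interpolate_frame`, `animate`) and the dictionary-of-parameters representation are illustrative assumptions, not code from the application or from Petkov:

```python
def lerp(a, b, t):
    """Linear interpolation between two parameter values."""
    return a + (b - a) * t

def interpolate_frame(begin_frame, end_frame, t):
    """Build one additional in-between frame (step (h)) by
    interpolating each rendering parameter between the beginning
    and end keyframes."""
    return {key: lerp(begin_frame[key], end_frame[key], t)
            for key in begin_frame}

def animate(begin_frame, end_frame, num_steps):
    """Steps (g)-(j): walk the temporal range from the beginning
    frame to the end frame, computing an interpolated frame at
    each transition position."""
    frames = []
    for i in range(num_steps + 1):
        t = i / num_steps          # transition position in [0, 1]
        frames.append(interpolate_frame(begin_frame, end_frame, t))
    return frames

# Example: transition a beam-size parameter from 2.0 to 8.0 over
# 4 steps, as in Petkov's beam-size animation example.
sequence = animate({"beam_size": 2.0}, {"beam_size": 8.0}, 4)
# sequence[0]["beam_size"] == 2.0, sequence[4]["beam_size"] == 8.0
```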
5 Regarding claim 2, Petkov teaches the method of claim 1 (see claim 1 rejection above), further comprising generating, by the processor, a texture to store color data; and assigning, by the processor, an output color to the one additional frame referencing the texture ([Page 14; Column 5, Lines 16-20] reciting “For rendering and animating medical images, look-up tables (LUT) may be used to store preset values for the parameters such as the colors, contrast, textures, or others. The look-up tables may include tens, hundreds, or thousands of parameters and the associated preset values.”).
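The look-up-table (LUT) approach quoted above, in which a texture-like table stores preset color values used during rendering, can be sketched as follows; this is a hypothetical illustration, and the structure and names (`build_lut`, the list-of-stops input) are assumptions, not Petkov's implementation:

```python
def build_lut(stops, size):
    """Precompute a color look-up table by linearly interpolating
    between (position, color) stops, where positions span [0, 1].
    Each entry is an (r, g, b) tuple."""
    lut = []
    for i in range(size):
        x = i / (size - 1)
        # Find the pair of stops surrounding position x.
        for (x0, c0), (x1, c1) in zip(stops, stops[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
                lut.append(tuple(a + (b - a) * t
                                 for a, b in zip(c0, c1)))
                break
    return lut

# Two color presets: black at position 0.0, white at 1.0.
lut = build_lut([(0.0, (0, 0, 0)), (1.0, (255, 255, 255))], 5)
```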
6 Regarding claim 9, Petkov teaches a non-transitory computer-readable medium with instructions stored thereon ([Page 18; Column 13, Lines 43-48] reciting “The memory 24 or other memory is alternatively or additionally a non-transitory computer readable storage medium storing data representing instructions executable by the programmed image processor 25 for creating smooth transitions between look-up tables for a volume rendering system 200.”), that when executed by a processor, performs the steps of claim 1 (see claim 1 rejection above).
7 Regarding claim 10, Petkov teaches a system generating smooth transitions between animated image frames by calculating missing data between the frames ([Abstract] reciting “Systems and methods are provided for generating smooth transitions between volume rendering presets when the volume rendering is used as part of an animation system.”; [Page 16; Column 9, Lines 11-20] reciting “The windowing reference may be a windowing setting that is set by the user or application or may be calculated as a function of the two keyframes. For example, the windowing reference may be the average windowing settings of the two keyframes. In an example, if the first keyframe uses windowing parameters to view the heart and the second keyframe uses windowing parameters to view the lungs, the windowing reference values may be an average or in between windowing settings of the two windowing parameters.”), comprising:
client device having at least a processor, a memory storage device, and a user interface display ([Page 13; Column 3, Lines 33-38] reciting “The system includes a medical imaging device, a memory, an image processor, and a display. The medical imaging device is configured to acquire medical imaging data. The memory is configured to store rendering presets for a plurality of keyframes in a sequence of keyframes.”); and
a data source ([Page 17; Column 12, Lines 48-55] reciting “Alternatively, data-driven easing may use a look-up table, with values derived from measurements of the rendered image. For example, a voxel fade effect may have a non-linear response to the time parameter in the animation system 100, after multiple voxels are projected along a viewing ray. A look-up table easing function may be used to enforce a linear transition in the rendered images.”);
wherein the client device is operative to perform the steps of claim 1 (see claim 1 rejection above).
Claim Rejections - 35 USC § 103
8 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
9 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10 Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Petkov et al. (US 10692267 B1) in view of Hillesland et al. (US 20170178397 A1).
11 Regarding claim 3, Petkov teaches the method of claim 2 (see claims 1-2 rejections above), but does not explicitly teach a second shader, wherein the second shader interpolates the frame data based on location and renders the animated sequence referencing the texture.
12 Hillesland teaches a second shader, wherein the second shader interpolates the frame data based on location and renders the animated sequence referencing the texture ([0015] reciting “FIGS. 1-6 illustrate real-time rendering of 3-D graphics that includes shading texels associated with a primitive that represents a portion of an object in a 3-D scene, caching information representing the shaded texels, and bypassing shading of previously cached texels associated with the primitive… The shader can use the triangle index for each texel to retrieve the triangle vertices that are used to interpolate values at the vertices to the location of the texel. Examples of the values that may be interpolated include values of a color, a position of the texel, a normal to the texel, a tangent to the texel, or any other texture coordinate including user-defined texture coordinates.”).
13 It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Petkov) to incorporate the teachings of Hillesland to provide a method that can get shader data based on the location and render an animated sequence, using the animations and interpolation methods provided by the teachings of Petkov. Doing so would determine values for more than one pixel as stated by Hillesland ([0015] recited).
14 Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Petkov et al. (US 10692267 B1) in view of Hillesland et al. (US 20170178397 A1) as applied to claims 1-3 above, and further in view of Mitchell et al. (US 20180322691 A1).
15 Regarding claim 4, Petkov in view of Hillesland teaches the method of claim 3 (see claims 1-3 rejections above), but does not explicitly teach wherein the interpolating is performed based on the location before the temporal range.
16 Mitchell teaches wherein the interpolating is performed based on the location before the temporal range ([0016] reciting “In such a case, the temporal color compression may include determining a smallest selection of keyframes from the rendered video frames that can be used to derive the remaining frames on a per-cell basis, where each video frame is partitioned into a regular grid with multiple grid cells and the determined keyframe cells are stored along with an array of per-frame parameter values used to interpolate the closest keyframes forward and backward in time. The temporal depth compression may include storing each of the depth frames as either a keyframe with all frame data or a P-frame that only encodes differences to the last keyframe which are determined as axis-aligned bounding boxes.”; [0032] reciting “Such an objective function may be evaluated for cameras placed in different locations (e.g., randomly) in a brute-force manner, through a non-linear search, or in any other suitable manner.”; [0035] reciting “where I.sub.q is an indicator function that returns 1 only if the reconstruction quality for a range of frames is for all frames in a given range, above a threshold Q. Intuitively, the keyframes may be determined as those frames that in-between frames do not differ from much, and the in-between frames may each be represented by a scalar value t.sub.j specifying how to interpolate between keyframes that are immediately before and after the in-between frame. As described, the scalar values may be, e.g., between 0 and 1 and determined using an optimization process that minimizes a reconstruction error between interpolated and reference frames.”).
17 It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Petkov in view of Hillesland) to incorporate the teachings of Mitchell to provide a method that can interpolate based on the location before the temporal range, utilizing the interpolation methods for frames based on the teachings of Petkov in view of Hillesland. Doing so would allow spatial compression using hardware-accelerated block-compression texture formats as stated by Mitchell ([0016] recited).
18 Regarding claim 5, Petkov in view of Hillesland teaches the method of claim 3 (see claims 1-3 rejections above), but does not explicitly teach wherein the interpolating is performed on the temporal range before the location.
19 Mitchell teaches wherein the interpolating is performed on the temporal range before the location ([0016] reciting “In such a case, the temporal color compression may include determining a smallest selection of keyframes from the rendered video frames that can be used to derive the remaining frames on a per-cell basis, where each video frame is partitioned into a regular grid with multiple grid cells and the determined keyframe cells are stored along with an array of per-frame parameter values used to interpolate the closest keyframes forward and backward in time. The temporal depth compression may include storing each of the depth frames as either a keyframe with all frame data or a P-frame that only encodes differences to the last keyframe which are determined as axis-aligned bounding boxes.”; [0032] reciting “Such an objective function may be evaluated for cameras placed in different locations (e.g., randomly) in a brute-force manner, through a non-linear search, or in any other suitable manner.”; [0035] reciting “where I.sub.q is an indicator function that returns 1 only if the reconstruction quality for a range of frames is for all frames in a given range, above a threshold Q. Intuitively, the keyframes may be determined as those frames that in-between frames do not differ from much, and the in-between frames may each be represented by a scalar value t.sub.j specifying how to interpolate between keyframes that are immediately before and after the in-between frame. As described, the scalar values may be, e.g., between 0 and 1 and determined using an optimization process that minimizes a reconstruction error between interpolated and reference frames.”).
20 It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Petkov in view of Hillesland) to incorporate the teachings of Mitchell to provide a method that can interpolate on the temporal range before the location, utilizing the interpolation methods for frames based on the teachings of Petkov in view of Hillesland. Doing so would allow spatial compression using hardware-accelerated block-compression texture formats as stated by Mitchell ([0016] recited).
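Mitchell's quoted scheme of storing keyframe cells plus a per-frame scalar used to interpolate between the closest keyframes forward and backward in time can be sketched as follows; a simple linear blend is assumed, and the names and data are hypothetical illustrations, not code from Mitchell:

```python
def reconstruct_frame(prev_key, next_key, t):
    """Derive an in-between frame from the keyframes immediately
    before and after it, using a per-frame scalar t in [0, 1]
    (as in the temporal color compression quoted above)."""
    return [(1.0 - t) * p + t * n for p, n in zip(prev_key, next_key)]

# Keyframe cells are stored only for frames 0 and 4; frames 1-3
# are kept as a single scalar parameter each.
key0 = [10.0, 20.0]
key4 = [30.0, 60.0]
t_values = [0.3, 0.5, 0.8]   # one scalar per in-between frame

in_between = [reconstruct_frame(key0, key4, t) for t in t_values]
```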
21 Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Petkov et al. (US 10692267 B1) in view of Rivard et al. (US 9721375 B1).
22 Regarding claim 6, Petkov teaches the method of claim 1 (see claim 1 rejection above), but does not explicitly teach further comprising calculating, by the processor, a zoom level for image data for the viewable area before the requesting the frame data from the data source.
23 Rivard teaches calculating, by the processor, a zoom level for image data for the viewable area before the requesting the frame data from the data source ([Page 26; Column 8, Lines 12-21] reciting “In one embodiment, the current animation sequence defines a sequence of frames, as discussed in greater detail below in FIG. 3E. In one embodiment, a given animation sequence (e.g. the current animation sequence) may be completed before a subsequent animation sequence is initiated. The current animation state may define any combination of a current zoom level, a current scroll position, and a current rotation angle for a collection of representative images being animated in the current animation sequence.”).
24 It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Petkov) to incorporate the teachings of Rivard to provide a zoom level for the image and frame data that is provided by the teachings of Petkov. Doing so would allow a user to observe representative images rotating in their proper place as stated by Rivard ([Page 26; Column 8, Lines 27-28] recited).
25 Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Petkov et al. (US 10692267 B1) in view of Kim et al. (US 20180052652 A1).
26 Regarding claim 7, Petkov teaches the method of claim 1 (see claim 1 rejection above), wherein the step of interpolating utilizes a calculation method selected from the group consisting of: Catmull-Rom Spline Interpolation ([Page 15; Column 7, Lines 38-40] reciting “Many variants may be used including the cubic spline, the natural cubic spline and the Catmull-Rom spline.”; [Page 16; Column 9, Lines 59-65] reciting “In some embodiments, the look-up table is constructed automatically based on a perceptual or non-perceptual image metric. FIG. 5 depicts several examples of easing functions including elastic, normalized Bezier, quantic, quartic, cubic, quadratic, sin, and linear. Each of these functions may be used to adjust the interpolation so that a smooth transition is provided.”).
27 Petkov does not explicitly teach wherein the step of interpolating utilizes a calculation method selected from the group consisting of: Catmull-Rom Spline Interpolation, B-Spline Interpolation, Lagrange Interpolation, Hermite Interpolation, and any combination thereof.
28 Kim teaches wherein the step of interpolating utilizes a calculation method selected from the group consisting of: Catmull-Rom Spline Interpolation, B-Spline Interpolation, Lagrange Interpolation, Hermite Interpolation, and any combination thereof ([0007] reciting “The controller may perform control to generate a third frame corresponding to an order of the first frame in the video signal based on the second frame by image interpolation, and may perform control to display an image based on a video signal where the second frame is combined with the third frame instead of the first frame.”; [0130] reciting “As an example of the interpolation, there are a polynomial interpolation such as Lagrange interpolation, Newton interpolation and Hermite interpolation, which obtains only one polynomial expression connecting all data points throughout a range; and spline interpolation, e.g., piecewise polynomial interpolation such as a third spline interpolation and B-spline interpolation, etc.”).
29 It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Petkov) to incorporate the teachings of Kim to include other interpolation methods alongside the Catmull-Rom Spline Interpolation taught by Petkov. Doing so would allow a method of obtaining a polynomial expression connecting all previously given data points as stated by Kim ([0130] recited).
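For reference, the Catmull-Rom spline named in the claim can be evaluated with the standard textbook formulation below; this is a generic sketch of the technique, not code from any cited reference:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between control points
    p1 and p2 at parameter t in [0, 1], with p0 and p3 serving as
    the neighboring control points that shape the tangents."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

# The segment passes through its interior control points, which is
# why Catmull-Rom splines suit keyframe interpolation.
assert catmull_rom(0.0, 1.0, 3.0, 4.0, 0.0) == 1.0
assert catmull_rom(0.0, 1.0, 3.0, 4.0, 1.0) == 3.0
```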
30 Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Petkov et al. (US 10692267 B1) in view of Binion et al. (US 20150112545 A1).
31 Regarding claim 8, Petkov teaches the method of claim 1 (see claim 1 rejection above), but does not explicitly teach further comprising listening, by the data source, for the request.
32 Binion teaches listening, by the data source, for the request ([0029] reciting “In some embodiments, data collection unit 50 also takes advantage of other sources, external to the vehicle 12, to collect information about the environment. The use of such sources allows the data collection unit 50 to collect information that may be hidden from sensors…The V2I data may be indicative of traffic control information (e.g., speed limits, traffic light states, etc.), objects or conditions sensed by the stations, or may provide any other suitable type of information (e.g., weather conditions, traffic density, etc.)…The onboard system 14 may receive V2X data simply by listening/scanning for the data, or may receive the data in response to a wireless request sent by the onboard system 14, for example.”).
33 It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to have modified the method (taught by Petkov) to incorporate the teachings of Binion to provide a method that listens for requests for the various information that is provided by the teachings of Petkov. Doing so would allow the methods to receive information about external objects and/or conditions via wireless signals sent by any capable type of external object or entity, such as a pedestrian, cyclist or driver operating a smartphone, another vehicle, an infrastructure element (e.g., a roadside wireless station), a commercial or residential location (e.g., a locale maintaining a WiFi access point), and more as stated by Binion ([0029] recited).
Conclusion
34 Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNNY TRAN LE whose telephone number is (571)272-5680. The examiner can normally be reached Mon-Thu: 7:30am-5pm; First Fridays Off; Second Fridays: 7:30am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOHNNY T LE/Examiner, Art Unit 2614
/KENT W CHANG/Supervisory Patent Examiner, Art Unit 2614