Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 recites the limitation "a decompressed image" twice in step b, at lines 2 and 3. It is unclear which of these two recitations each subsequent reference to "the decompressed image" refers to. Applicant should consider differentiating the two recitations as "a first decompressed image" and "a second decompressed image," if it is proper to claim so.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Almehio et al., US 2023/0144346 A1 ("ALMEHIO"), in view of Li et al., US 2017/0064330 A1 ("LI").
Regarding claim 1, ALMEHIO discloses a method for projecting a dynamic lighting beam using a lighting system (ALMEHIO, abstract, i.e., the invention provides a method for managing image data in a motor vehicle lighting system, the lighting system including at least one lighting module intended to project light beams, the light beams being generated from data relating to the selection of at least one image, each image being respectively defined by a matrix including a plurality of horizontal or vertical rows of pixels, with each pixel having a numerical value related to a light intensity of the pixel. The method includes determining whether the pixel under analysis is considered to be a significant point of inflection of the image, so as to transmit it to at least one lighting module, so that it is able to project a resulting image) of a motor vehicle (ALMEHIO, abstract), the lighting system including a memory (ALMEHIO, ¶ 47), a control unit (ALMEHIO, ¶ 5) and a lighting module (ALMEHIO, ¶ 5).
It is noted that ALMEHIO is silent about reading each image from the compressed video stored in the memory; decompressing a decompressed image, for each image read with the control unit, using a dictionary-based decompression algorithm to give a decompressed image, each data sequence of the read image being decompressed either in a first mode in which the decompressed image has added to the decompressed image a copy of the sequence, or in a second mode in which the decompressed image has added to the decompressed image a data sequence of the previously read image added to the decompressed image, or in a third mode in which the decompressed image has added to the decompressed image a data sequence of a previously decompressed image; projecting, by the lighting module, a pixelated light beam, determined based on each decompressed image as claimed.
However, LI discloses a memory (LI, ¶ 94) storing a compressed video (LI, as cited below, i.e., a video to be compressed) including a plurality of successive images (LI, as cited below, i.e., compressed images), each consisting of a plurality of data sequences (LI, as cited below, i.e., data sequences to be added to the compressed images), the method comprising:
a. reading each image (LI, ¶ 32, i.e. video input device) from the compressed video stored in the memory (LI, ¶ [0094] For the various dictionary coding modes described herein, the decoder can decode current pixel values in a matching mode and/or a direct mode. In matching mode, the decoder decodes current pixel values that are predicted from previously decoded pixel values (e.g., previously reconstructed pixel values) which may be stored in a 1-D dictionary or in another location (e.g., a reconstructed picture). For example, the decoder can receive one or more codes indicating an offset (e.g., within a dictionary) and a length (indicating a number of pixel values to be predicted from the offset). In direct mode, the decoder can decode pixel values directly without prediction);
b. decompressing a decompressed image, for each image read with the control unit, using a dictionary-based decompression algorithm to give a decompressed image, each data sequence of the read image being decompressed either in a first mode in which the decompressed image has added to the decompressed image a copy of the sequence (LI, ¶ 94, i.e. direct mode without prediction), or in a second mode in which the decompressed image has added to the decompressed image a data sequence of the previously read image added to the decompressed image (LI, as cited above, i.e. matching modes, and LI, as cited below, ¶ 103, i.e. reconstructed sample values), or in a third mode in which the decompressed image has added to the decompressed image a data sequence of a previously decompressed image (LI, as cited above, i.e. matching modes. ¶ [0103] In 1-D dictionary mode, sample values (e.g., pixel values) are predicted by reference (using offset and length) to previously sample values stored in a 1-D dictionary (e.g., previously reconstructed sample values). For example, a video encode or image encoder can encode current sample values with reference to a 1-D dictionary storing previous sample values (e.g., reconstructed or original sample values) that are used to predict and encode the current sample values. A video decoder or image decoder can decode current sample values with reference to a 1-D dictionary storing previously decoded (e.g., reconstructed) sample values that are used to predict and decode the current sample values.);
c. projecting, by the lighting module, a pixelated light beam (LI, ¶ 34), determined based on each decompressed image (LI, ¶ [0068] An output sequencer (480) uses the MMCO/RPS information (432) to identify when the next frame to be produced in output order is available in the decoded frame storage area (460). When the next frame (481) to be produced in output order is available in the decoded frame storage area (460), it is read by the output sequencer (480) and output to the output destination (490) (e.g., display). In general, the order in which frames are output from the decoded frame storage area (460) by the output sequencer (480) may differ from the order in which the frames are decoded by the decoder (450)).
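For clarity of the record, the direct-mode/matching-mode decoding behavior cited from LI (¶ 94, ¶ 103) can be sketched as follows. This is an illustrative sketch only: the token format, with ("direct", value) and ("match", offset, length) tuples, is a hypothetical simplification and does not reflect LI's actual bitstream syntax.

```python
def decode(tokens):
    """Sketch of 1-D dictionary decoding: direct mode appends a pixel
    value without prediction; matching mode copies `length` previously
    reconstructed values starting `offset` positions back."""
    dictionary = []  # previously reconstructed pixel values (1-D dictionary)
    for token in tokens:
        if token[0] == "direct":
            dictionary.append(token[1])
        else:  # matching mode
            _, offset, length = token
            start = len(dictionary) - offset
            for i in range(length):
                # Copy one value at a time, so the source run may overlap
                # the values just produced (offset < length is valid).
                dictionary.append(dictionary[start + i])
    return dictionary
```

For example, decode([("direct", 7), ("direct", 7), ("match", 2, 4)]) reconstructs six pixel values of 7, the last four predicted from the first two.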
Both ALMEHIO and LI teach imaging/display systems that operate on compressed images, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include, in the ALMEHIO disclosure, compressing images with different modes, as taught by LI. Such inclusion would have increased the usefulness of the system by reducing the bit rate of the video, and would have been consistent with the rationale of combining prior art elements according to known methods to yield predictable results to establish a prima facie case of obviousness (MPEP 2143(I)(A)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding claim 2, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 1 (ALMEHIO, abstract), wherein each data sequence of the read image includes a header (LI, ¶ [0138] In some implementations, an escape code or flag is used to indicate when direct mode is used for a pixel value. For example, an encoder can place the escape code or flag in the bitstream with the directly encoded pixel value so that the decoder knows that the pixel value is encoded using direct mode. In this way, the decoder can distinguish between pixel values encoded in direct mode and pixel values encoded using matching mode. In addition, coding in the 1-D dictionary mode can support switching between matching mode and direct mode as needed (e.g., on a pixel-by-pixel basis).) containing a decompression code (as cited below, i.e. ¶ 103) and decompressing a decompressed image (LI, ¶ 109) includes each data sequence of the read image being decompressed in the first, the second or the third mode (LI, ¶ 103) depending on the decompression code contained in the header of the sequence (LI, ¶ 94).
Regarding claim 3, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 2, wherein, when the header of the data sequence of the read image includes a decompression code (LI, ¶ 94, direct mode/matching mode) indicating the first mode, the sequence includes a code for a number N1 and decompressing a decompressed image includes the data sequence of the read image being decompressed in the first mode by adding, to the decompressed image, the N1 data blocks following the code for the number N1 (LI, ¶ 94, i.e. direct mode).
Regarding claim 4, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 2, wherein, when the header of the data sequence of the read image includes a decompression code indicating the second or the third mode (LI, ¶ 94, matching mode), the sequence includes an original position code (LI, ¶ 109) and a code for a length L (LI, ¶ 109, offset and length) and decompressing a decompressed image includes the data sequence of the read image being decompressed in the second or the third mode by adding, to the decompressed image, the L data added to the decompressed image or to a previously decompressed image from the original position or up to the original position (LI, ¶ 103, i.e. the decompressed image or to a previously decompressed image from the original position or up to the original position).
Regarding claim 5, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 4, wherein, when the header of the data sequence of the read image includes a decompression code indicating the second or the third mode (LI, ¶ 94, i.e. matching mode), the header and the code for the original position of the sequence together form a predetermined number N2 of data blocks, and decompressing a decompressed image includes the original position being obtained from all of the remaining data of the N2 data blocks from the header, which form the code for the original position (LI, ¶ 103, i.e. matching mode).
Regarding claim 6, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 4, wherein, when the header of the data sequence of the read image includes a decompression code indicating the second or the third mode (LI, ¶ 94, matching mode), decompressing a decompressed image includes the length L being obtained from the value of the decompression code, which forms or forms part of the code for the length L (LI, ¶ 94, offset and length).
Regarding claim 7, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 6, wherein, when the header of the data sequence of the read image includes a decompression code indicating the second or the third mode (LI, ¶ 94), decompressing a decompressed image includes the length L being obtained by adding the value of the decompression code and of each of the data blocks following the code for the original position until one of these blocks contains a datum equal to a predetermined value, the set of the blocks forming the code for the length L (LI, ¶ 109, as cited above, i.e. offset and length).
Regarding claim 8, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 4, wherein, when the header of the data sequence of the read image includes a decompression code indicating the second or the third mode (LI, ¶ 94), the header includes a literal copy code (LI, ¶ 109) indicating the presence or the absence of a last block in the sequence and decompressing a decompressed image includes if the literal copy code indicates the presence of a last block in the sequence, the last block of the sequence is added to the decompressed image at the end of the L added data (LI, ¶ 94, i.e. direct copy mode or matching mode).
Regarding claim 9, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 4, wherein, when the header of the data sequence of the read image includes a decompression code (LI, ¶ 94) indicating the second or the third mode (LI, ¶ 103), the header includes a target code indicating the second or the third mode and decompressing a decompressed image includes the data sequence of the read image being decompressed in the second mode, if the target code has a first value (LI, ¶ 109), by adding, to the decompressed image, the L data added to the decompressed image from the original position (LI, ¶ 103) or in the third mode, if the target code has a second value, by adding, to the decompressed image, the L data added to a previously decompressed image (LI, ¶ 138) from the original position or up to the original position (LI, ¶ 94).
Regarding claim 10, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 9, wherein, when the header of the data sequence of the read image includes a target code indicating the third mode (LI, ¶ 103), the header includes a reading direction code indicating a reading direction (LI, ¶ 138) of the data to be added to the decompressed image and decompressing a decompressed image includes the data sequence of the read image being decompressed in the third mode by adding, to the decompressed image, the L data added to a previously decompressed image from the original position if the reading direction code has a first value or up to the original position if the reading direction code has another value (LI, ¶ 138).
Regarding claim 11, ALMEHIO/LI, for the same motivation of combination, further discloses the method for projecting a dynamic lighting beam using a lighting system of a motor vehicle according to claim 10, wherein, for each decompressed image, the pixelated light beam is determined based on the sum of this decompressed image and all of the previously decompressed images (LI, ¶ 103).
Regarding claim 12, ALMEHIO/LI, for the same motivation of combination, discloses a method for compressing an initial video, implemented by a computing system, the method comprising:
a. reading each image from the initial video (LI, ¶ 32);
b. compressing each read image to give a compressed image (LI, ¶ 55, … If the MMCO/RPS information (342) indicates that a coded frame (341) needs to be stored, the decoding process emulator (350) models the decoding process that would be conducted by a decoder that receives the coded frame (341) and produces a corresponding decoded frame (351). In doing so, when the encoder (340) has used decoded frame(s) (369) that have been stored in the decoded frame storage area (360), the decoding process emulator (350) also uses the decoded frame(s) (369) from the storage area (360) as part of the decoding process.), the compressing each read image includes, for each read datum from the read image, called current datum (LI, ¶ 51):
i. reading a datum from the read image, called current datum (LI, ¶ 55);
ii. selecting a first data sequence of the read image, from among a set of data sequences of the read image beginning with this current datum, and a second data sequence, from among a set of preceding data sequences of the read image and a set of data sequences of a preceding image that together maximize a data sequence similarity function (LI, ¶ [0052] For the various dictionary coding modes described herein, the encoder can calculate hash values of previously reconstructed sample values (e.g., groupings of 1 pixel, 2 pixels, 4 pixels, 8 pixels, and so on) and compare those has values for a hash value of a current pixel value being encoded. Matches of length one or more can be identified in the previously reconstructed sample values based on the hash comparison and the current pixel value (or values) can be encoded using the various 1-D and pseudo 2-D dictionary modes described herein (or the inter pseudo 2-D dictionary mode with reference to a reference picture).);
iii. adding, to the compressed image depending on the length of the selected first data sequences, a third data sequence including the current datum or a compressed sequence determined based on the original position and on the length of the selected second data sequence (LI, ¶ [0101] In particular, using the 1-D dictionary mode when encoding pixel values can improve performance and reduce needed bits when encoding video content, particularly screen content (e.g., when performing screen capture). Screen content typically includes repeated structures (e.g., graphics, text characters), which provide areas with the same pixel values that can be encoded with prediction to improve performance.);
each read datum from the read image being the first datum of the read image, and then each first datum of the read image situated after the last datum of the selected first data sequence (LI, ¶ 109);
c. storing each compressed image in a memory of the computing system so as to form a compressed video (LI, ¶ [0058] The aggregated data (371) from the temporary coded data area (370) are processed by a channel encoder (380). The channel encoder (380) can packetize the aggregated data for transmission as a media stream (e.g., according to a media stream multiplexing format such as ISO/IEC 13818-1), in which case the channel encoder (380) can add syntax elements as part of the syntax of the media transmission stream. Or, the channel encoder (380) can organize the aggregated data for storage as a file (e.g., according to a media container format such as ISO/IEC 14496-12), in which case the channel encoder (380) can add syntax elements as part of the syntax of the media storage file. Or, more generally, the channel encoder (380) can implement one or more media system multiplexing protocols or transport protocols, in which case the channel encoder (380) can add syntax elements as part of the syntax of the protocol(s). The channel encoder (380) provides output to a channel (390), which represents storage, a communications connection, or another channel for the output).
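For clarity of the record, the literal-versus-match selection recited in claim 12, step b, can be sketched as follows. This is a hypothetical, greedy simplification using exact matching against all prior data of a single sequence; LI additionally describes hashing groups of pixels (¶ 52) and searching data of a preceding image. The MIN_MATCH threshold and token format are assumptions for illustration only.

```python
MIN_MATCH = 2  # assumed threshold: shorter matches are coded as literals

def compress(data):
    """Greedy sketch: at each position, find the longest match among
    preceding data; emit ("match", offset, length) if long enough,
    otherwise emit the current datum directly."""
    tokens = []
    pos = 0
    while pos < len(data):
        best_len, best_off = 0, 0
        for start in range(pos):  # search all preceding positions
            length = 0
            while (pos + length < len(data)
                   and data[start + length] == data[pos + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, pos - start
        if best_len >= MIN_MATCH:
            tokens.append(("match", best_off, best_len))
            pos += best_len
        else:
            tokens.append(("direct", data[pos]))
            pos += 1
    return tokens
```

For example, compress([1, 2, 1, 2, 1, 2]) yields two direct tokens followed by an overlapping match of length 4 at offset 2.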
Regarding claim 13, ALMEHIO/LI, for the same motivation of combination, further discloses the method for compressing an initial video, implemented by a computing system, according to claim 12, wherein, for two data sequences, the similarity function (LI, ¶ 177) of these data sequences is determined on the basis of the length of the data sequences; of the difference between two corresponding data of these data sequences, and of a predetermined tolerance threshold for the difference (LI, ¶, [0172] Matching is then performed to see if a pixel value (or pixel values) in the hash matches the current pixel value (or current pixel values) being encoded. First, a check is made to match every 1 pixel value using the hashed pixel values (e.g., by creating a hash of 1 current pixel value and comparing it to hashes of previous 1 pixel values in a dictionary). If a 1 pixel match is found, an encoder can check how many pixels can match from the current pixel to determine the length (the number of pixels that match from the current pixel). If a matching length of 2 is found (e.g., if a current pixel value matches a pixel value in the dictionary at a specific offset with length 2), then matching can proceed with 2 pixels and above (e.g., pixel values at other offsets in the dictionary with a length of 2, or more, may match the current pixel) without the need to check hashes of 1 pixel anymore for the current pixel. Similarly, if a matching length of 4 is found, then hash checking begins with 4 pixels and above, and similarly with 8 pixels. In some implementations, hash search is implemented with 1, 2, 4, and 8 pixels. In other implementations, hash search can use greater or fewer pixels.).
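The similarity function of claim 13, determined from the length of the data sequences, the difference between corresponding data, and a predetermined tolerance threshold, might be sketched as follows. This formulation is a hypothetical illustration and is not taken from LI, which describes exact hash-based matching.

```python
def similar_length(seq_a, seq_b, tolerance):
    """Count how many leading positions of the two sequences match to
    within `tolerance`; a longer near-identical run scores higher."""
    n = 0
    for a, b in zip(seq_a, seq_b):
        if abs(a - b) > tolerance:
            break
        n += 1
    return n
```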
Regarding claim 14, ALMEHIO/LI, for the same motivation of combination, further discloses the method for compressing an initial video, implemented by a computing system, according to claim 12, wherein the set of preceding data sequences of the read image, in which the second data sequence is sought, consists of all of the data sequences of the read image beginning with a preceding datum of the image whose position is spaced from the position of the current datum by at most a first predetermined distance (LI, ¶ [0110] In the 1-D dictionary mode, in some implementations pixel values in a current block can be predicted from pixel values in a previous block (e.g., depending on the maximum size of the dictionary). For example, in a picture coded using 64×64 blocks, pixel values from the fourth block in the picture can be predicted (e.g., using offset and length) from pixel values from the first block in the picture that are stored in a 1-D dictionary.) and in that the set of data sequences of a preceding image, in which the second data sequence is sought, consists of all of the data sequences of the preceding image beginning with a datum whose position is spaced from the position of the current datum by at most a second predetermined distance (LI, ¶ [0123] In some implementations, the offset and length can overlap the current pixel value being encoded/decoded. As an example, consider pixel values [P-2, P-1, P0, P1, P2, P3] where P-2 and P-1 are the last two pixel values in the 1-D dictionary, P0 is the current pixel value being encoded/decoded, and P1 through P3 are the next pixel values to be encoded/decoded. In this situation, an offset of 1 and a length of 3 (un-encoded offset and length values) is a valid condition in which P0 is predicted from P-1, P1 is predicted from P0, and P2 is predicted from P1. 
As should be understood, an offset of 1 (un-encoded value, which would be 0 when encoded) means one position back from the current pixel value into the 1-D dictionary (e.g., in some implementations a negative sign is added to the offset, which would be an offset of −1 in this example).).
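The overlapping offset/length condition quoted from LI's ¶ 123, where an offset of 1 and a length of 3 predicts P0 through P2 from the single value immediately preceding them, can be illustrated with a small sketch (a hypothetical helper, not LI's syntax):

```python
def copy_overlapping(dictionary, offset, length):
    """Copy `length` values starting `offset` positions back; copying
    one value at a time lets the source run into values just produced,
    so offset < length is valid (LI ¶ 123's offset-1, length-3 case)."""
    out = list(dictionary)
    start = len(out) - offset
    for i in range(length):
        out.append(out[start + i])
    return out
```

With offset 1, the single last dictionary value is effectively replicated `length` times, a run-length behavior.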
Regarding claim 15, ALMEHIO/LI, for the same motivation of combination, further discloses the method for compressing an initial video, implemented by a computing system, according to claim 12, wherein for each current datum, the third data sequence includes a header (LI, ¶ 138), and the header (LI, ¶ 84) includes a decompression code indicating whether the third data sequence includes the current datum or the compressed sequence (LI, ¶ 94).
Regarding claim 16, ALMEHIO/LI, for the same motivation of combination, further discloses the method for compressing an initial video, implemented by a computing system, according to claim 15, wherein, if the decompression code indicates that the third data sequence includes the compressed sequence, the third data sequence includes a code for an original position of the selected second data sequence and a code for a length L of the selected second data sequence (LI, ¶ 157).
Regarding claim 17, ALMEHIO/LI, for the same motivation of combination, further discloses the method for compressing an initial video, implemented by a computing system, according to claim 15, wherein compressing each read image includes, for each current datum, depending on the length of the selected first data sequence:
a. adding, to the compressed image, a third data sequence including the current datum (LI, ¶ 137) or
b. adding, to the compressed image, a third data sequence including a compressed sequence determined based on the original position and on the length of the selected second data sequence (LI, ¶ 141 matching) or
c. adding the current datum to a third data sequence previously added to the compressed image (LI, ¶ 144).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20170070238 A1 METHOD FOR LOSSLESS DATA COMPRESSION/DECOMPRESSION AND DEVICE THEREOF
US 9549048 B1 Transferring compressed packet data over a network
US 20150381201 A1 SYSTEM AND METHOD FOR DICTIONARY-BASED CACHE-LINE LEVEL CODE COMPRESSION FOR ON-CHIP MEMORIES USING GRADUAL BIT REMOVAL
US 20150242122 A1 METHOD FOR WRITING DATA, MEMORY STORAGE DEVICE AND MEMORY CONTROL CIRCUIT UNIT
US 20150195225 A1 COMPRESSING AND DECOMPRESSING ELECTRONIC MESSAGES IN MESSAGE THREADS
US 20150133120 A1 METHOD, SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM FOR TRANSMITTING CONTENTS WITH SUPPORTING HAND-OVER
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANK F HUANG whose telephone number is (571)272-0701. The examiner can normally be reached Monday-Friday, 8:30 am - 6:00 pm (Eastern Time), Federal Alternative First Friday Off.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at (571) 272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FRANK F HUANG/Primary Examiner, Art Unit 2485