Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claims 1-8, 12-14, and 16-24 are pending.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-8, 13, 14, 16-22, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Lui et al. (US Pub. No. 2022/0261634), hereafter, “Lui,” in view of Said (US Pub. No. 2022/0124376) and Kim et al. (US Pub. No. 2022/0029638), hereafter, “Kim.”
As to claim 1, Lui discloses a method implemented by a wireless transmit/receive unit (WTRU) for wireless communications (Abstract; wireless devices may be used, see at least [0500]), the method comprising:
transmitting, to an Edge device ([0500], disclosing the types of devices that may be used), a first message including prediction values associated with a neural network and information indicating a type of the prediction values associated with the neural network, wherein the prediction values were marshaled into one or more byte arrays before transmission ([0137], particularly, “In the hidden layer, the second hidden layer obtains a predicted intermediate result value from the first hidden layer to perform a computation operation and an activation operation, and then sends the obtained predicted intermediate result value to the next hidden layer. The same operations are performed in the following layers to obtain the output value in the output layer in the neural network,” with [0150], particularly, “In the inference process, the data to be quantized includes at least one type of neurons, weights, and biases of the neural network. If the data to be quantized are the weights, the data to be quantized may be all or part of the weights of a certain layer in the neural network. If the certain layer is a convolution layer, the data to be quantized may be all or part of the weights with a channel as a unit in the convolution layer, in which the channel refers to all or part of the channels of the convolution layer. It should be noted that only the convolution layer has a concept of channels. In the convolution layer, only the layered weights are quantized in a channel manner,” disclosing “information indicating a type of the prediction values associated with the neural network,” and [0141], disclosing how the data is formatted for the neural network, i.e., as 8-bit fixed-point numbers, in other words sequences or arrays of “bytes” (8 bits equaling 1 byte); an illustrative sketch of such marshalling follows this mapping);
receiving, from the Edge device, a second message based on the prediction values and the type of the prediction values ([0137], particularly, “In the hidden layer, the second hidden layer obtains a predicted intermediate result value from the first hidden layer to perform a computation operation and an activation operation, and then sends the obtained predicted intermediate result value to the next hidden layer. The same operations are performed in the following layers to obtain the output value in the output layer in the neural network”; “the following layers” receiving a “second message”).
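For illustration only, the following is a minimal Python sketch of the “marshaled into one or more byte arrays” limitation mapped above, assuming 8-bit quantization in the spirit of Lui’s fixed-point format ([0141]); the type codes, header layout, and scale factor are hypothetical and not taken from the record:

```python
# Hypothetical sketch: marshal neural-network prediction values into a
# byte array with a leading type indicator (all names/layout assumed).
import struct

TYPE_POINT_ESTIMATE = 0x01     # assumed type codes, not from the record
TYPE_PROBABILITY_DIST = 0x02

def marshal_predictions(values, type_code, scale=127.0):
    """Quantize floats in [-1, 1] to signed 8-bit integers and pack them
    after a 1-byte type field and a 4-byte big-endian length field."""
    quantized = bytes(int(max(-1.0, min(1.0, v)) * scale) & 0xFF for v in values)
    return struct.pack("!BI", type_code, len(quantized)) + quantized

def unmarshal_predictions(payload, scale=127.0):
    """Inverse operation: recover the type code and approximate floats."""
    type_code, length = struct.unpack("!BI", payload[:5])
    raw = payload[5:5 + length]
    return type_code, [(b - 256 if b > 127 else b) / scale for b in raw]

msg = marshal_predictions([0.25, -0.5, 0.9], TYPE_POINT_ESTIMATE)
```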
However, Lui does not explicitly disclose the second message including video data.
But, Said discloses receiving, from an Edge device, a second message including video data based on prediction values and a type of the prediction values ([0021], “In some aspects, the encoded video data comprises one or more syntax elements of a video bitstream. In some aspects, the one or more syntax elements are indicative of one or more parameters defining a neural network for decoding the encoded video data. In some aspects, the one or more parameters defining the neural network comprise at least one of weights of the neural network and an activation function of the neural network.” See further [0069], discussing predictions and types).
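For illustration only, a hypothetical Python sketch of a second message bundling an encoded video bitstream with syntax elements describing the decoding neural network, loosely following the arrangement quoted from Said [0021]; the JSON framing and field names are assumptions, not the reference’s disclosure:

```python
# Hypothetical sketch: a second message carrying encoded video data plus
# syntax elements describing the neural network used to decode it.
import json
import struct

def build_second_message(video_bitstream, weights, activation):
    """Prefix JSON-encoded syntax elements (length-framed) to the bitstream."""
    syntax = json.dumps({"weights": weights, "activation": activation}).encode()
    return struct.pack("!I", len(syntax)) + syntax + video_bitstream

def parse_second_message(message):
    """Split the message back into syntax elements and the raw bitstream."""
    (syntax_len,) = struct.unpack("!I", message[:4])
    syntax = json.loads(message[4:4 + syntax_len])
    return syntax, message[4 + syntax_len:]

msg = build_second_message(b"\x00\x01\x02", [0.2, -0.1], "relu")
```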
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Lui and Said in order to extend Lui’s system to a broader variety of data and thereby increase its potential market.
However, Lui and Said do not explicitly disclose receiving, from the Edge device, a first acknowledgement message indicating the type of the prediction values associated with the neural network that the Edge device has received; and transmitting, to the Edge device, a second acknowledgement message indicating that the video data was received.
But, Kim discloses receiving, from the Edge device, a first acknowledgement message indicating values associated with the neural network that the Edge device has received ([0013], particularly, “The method may include: transmitting data including a plurality of information blocks, wherein each of the plurality of information blocks may include a corresponding cyclic redundancy check (CRC); receiving a hybrid automatic repeat request acknowledgement/negative acknowledgement (HARQ ACK/NACK) for the transmitted data; learning to retransmit the plurality of information blocks; and retransmitting the plurality of information blocks based on the HARQ ACK/NACK,” with [0161]-[0169], which disclose its operation within the context of a neural network);
transmitting, to the Edge device, a second acknowledgement message indicating that the data was received ([0013], the passage quoted above, with [0161]-[0169], which disclose its operation within the context of a neural network).
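For illustration only, a minimal Python sketch of a HARQ-style acknowledgement exchange with per-block CRCs, in the spirit of the passage quoted from Kim [0013]; the function names, message shapes, and choice of CRC-32 are assumptions:

```python
# Hypothetical sketch: per-block CRC-32 checks with ACK/NACK feedback
# and selective retransmission of the NACKed information blocks.
import zlib

def attach_crcs(blocks):
    """Transmitter: pair each information block with its CRC-32."""
    return [(blk, zlib.crc32(blk)) for blk in blocks]

def ack_nack(received):
    """Receiver: True (ACK) where the CRC verifies, False (NACK) otherwise."""
    return [zlib.crc32(blk) == crc for blk, crc in received]

def retransmit(blocks, feedback):
    """Transmitter: resend only the blocks that were NACKed."""
    return [blk for blk, ok in zip(blocks, feedback) if not ok]

blocks = [b"weights", b"biases"]
wire = attach_crcs(blocks)
wire[1] = (b"corrupted", wire[1][1])          # simulate a channel error
resend = retransmit(blocks, ack_nack(wire))   # only b"biases" goes out again
```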
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the application to combine the teachings of Lui and Said with Kim in order to utilize a known and reliable method of ensuring that data is received at the entities to which it was transmitted.
As to claim 14, it is rejected under a rationale similar to that set forth in the rejection of claim 1.
As to claims 2 and 16, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses receiving a third message including a different type of prediction values associated with the neural network (Lui, [0137] and [0150]).
As to claims 3 and 17, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses unmarshalling the video data received in the second message and decoding the unmarshalled video data (Lui, [0137] and [0150]; Said, [0021] and [0069]).
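For illustration only, a hypothetical Python sketch of the receive-side steps mapped above: unmarshalling the byte-array payload of the second message and handing the recovered bitstream to a decoder. The framing mirrors the marshalling sketch earlier in this action, and the decoder is a stub standing in for a real video decoder:

```python
# Hypothetical sketch: unmarshal the second message's byte array and
# pass the recovered bitstream to a (stubbed) video decoder.
import struct

def unmarshal_video(payload):
    """Split a payload into its 1-byte type code and the video bitstream."""
    type_code, length = struct.unpack("!BI", payload[:5])
    return type_code, payload[5:5 + length]

def decode_video(bitstream):
    """Stand-in for an actual video decoder on the WTRU."""
    return {"decoded_bytes": len(bitstream)}   # placeholder output

type_code, bitstream = unmarshal_video(struct.pack("!BI", 0x01, 3) + b"\x10\x20\x30")
result = decode_video(bitstream)
```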
As to claims 4 and 18, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses that the second message comprises one or more parameters indicating the type of the prediction values (Lui, [0137] and [0150]).
As to claims 5 and 19, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses that the one or more parameters are selected by a decision policy (Lui, [0137] and [0150]).
As to claims 6 and 20, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses that the one or more parameters are received as a byte array (Lui, [0137], [0141], and [0150]).
As to claims 7 and 21, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses that the type of the prediction values associated with the neural network comprises a point estimate or a probability distribution (Lui, [0122]).
As to claims 8 and 22, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses that the different type of prediction values associated with the neural network comprises a point estimate or a probability distribution and is different from the type of the prediction values associated with the neural network (Lui, [0122]).
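For illustration only, a hypothetical Python sketch of the two prediction-value types recited in these claims, modeled as simple containers; the representations are assumptions, not the claimed data structures:

```python
# Hypothetical sketch: the two claimed prediction-value types.
from dataclasses import dataclass
from typing import List

@dataclass
class PointEstimate:
    value: float                  # a single predicted value

@dataclass
class ProbabilityDistribution:
    outcomes: List[float]         # candidate outcomes
    probabilities: List[float]    # one probability per outcome

pred_a = PointEstimate(0.87)
pred_b = ProbabilityDistribution([0.0, 1.0, 2.0], [0.1, 0.6, 0.3])
```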
As to claims 13 and 24, the teachings of Lui, Said, and Kim are combined for the same reasons set forth in the rejection of claim 1, and the combination further discloses selecting one or more of the following procedures: (1) using low-quality video rendering and low-quality prediction; (2) using low-quality video rendering and high-quality prediction; (3) using low-quality prediction and high-quality video rendering; or (4) using high-quality video rendering and high-quality prediction (Said, [0021] and [0069]).
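For illustration only, a hypothetical Python sketch of a decision policy selecting among the four rendering/prediction quality combinations recited in claims 13 and 24; the bandwidth and compute-budget inputs and their thresholds are assumptions:

```python
# Hypothetical sketch: choose a rendering/prediction quality pairing
# from assumed bandwidth and compute-budget thresholds.
def select_procedure(bandwidth_mbps, compute_budget):
    rendering = "high" if bandwidth_mbps >= 10.0 else "low"
    prediction = "high" if compute_budget >= 0.5 else "low"
    return f"{rendering}-quality video rendering, {prediction}-quality prediction"

choice = select_procedure(12.0, 0.3)
# -> "high-quality video rendering, low-quality prediction"
```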
Allowable Subject Matter
Claims 12 and 23 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J DAILEY whose telephone number is (571) 270-1246. The examiner can normally be reached 9:30 am-6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema can be reached on 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/THOMAS J DAILEY/ Primary Examiner, Art Unit 2458