DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: METHOD, TERMINAL, AND MEDIUM FOR COMPRESSION OF FIRST INFORMATION
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
None of the instant claims invokes 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-9 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “large” in claims 5 and 19 is a relative term which renders the claims indefinite. The term “large” is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Para. 0039 of the instant specification discloses: “a large-scale parameter, for example, collects statistics on large-scale distribution information of a key parameter corresponding to information that meets the first condition.” However, the term “large-scale parameter” is illustrated only by an example referring to “large-scale distribution,” which is itself relative because it likewise uses the term “large.” Dependent claims 6-9 are rejected by virtue of their dependency on claim 5.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by WO 2020/139179 A1 to TULLBERG et al. (“Tullberg”).
As to claims 1-3 and 5, see the similar rejections of claims 15-17 and 19, respectively. The apparatus teaches the methods.
As to claim 6, Tullberg further discloses the method according to claim 5 (see rejection to claim 5), wherein the differential mode [Examiner notes that “differential mode” is a term written in the alternative in claim 5, therefore further limitations to limit it do not need to be given patentable weight] comprises at least one of the following: performing differentiation according to time; performing differentiation according to a position; and performing differentiation according to a target parameter, wherein the target parameter comprises at least one of: a tracking area (TA), a frequency, a public land mobile network, connection information, and a quality of service flow (QoS Flow).
As to claim 7, Tullberg further discloses the method according to claim 5 (see rejection to claim 5), wherein the compressing, by the terminal, the first information according to a category of the first information [Examiner notes that “according to a category” is a term written in the alternative in claim 5, therefore further limitations to limit it do not need to be given patentable weight] comprises: obtaining a category corresponding to each piece of the first information; and selecting first information corresponding to a target category as the compressed first information.
As to claim 8, Tullberg further discloses the method according to claim 5, wherein the compressing, by the terminal, the first information according to a priority of the first information comprises [Examiner notes that “a priority” is a term written in the alternative in claim 5, therefore further limitations to limit it do not need to be given patentable weight]: obtaining a priority corresponding to each piece of the first information; and selecting first information with a priority higher than a preset priority as the compressed first information.
As to claim 9, Tullberg further discloses the method according to claim 5, wherein an association relationship exists between at least two information types in the following information of the first information reported by the terminal: time information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data; claim 1: increasing the cluster counter by one for each normal data sample that is associated with the cluster, i.e. a counter pertaining to timing); measurement result information (page 15, lines 25-31, wireless device may collect the number of successive data samples by performing one or more measurements; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to measurement); page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data); position information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data; claim 1: updating the cluster centroid to correspond to a mean position of all normal data samples that are associated with the cluster); event information (page 15, lines 31-35, wireless device 120 may be triggered to collect the number of successive data samples (i.e. an association) by a communications event; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to event)); and channel characteristic information.
As to claim 10, Tullberg further discloses the method according to claim 1, wherein different second target information (fig. 15, 1506 as opposed to 1503) corresponds to different first conditions (page 32, lines 1-6, The thresholds may be determined either (i.e. can be different) from the original clusters with possible correlations between axes or the orthogonalized axes from the PCA without correlations. If for example GMMs are used to represent the training data, distance/similarity measures between distributions such as the Kullback-Leibler (KL) divergence may be useful for anomaly detection) or different preset compression manners, and the second target information comprises at least one of the following: a cell; a TA area; a frequency; a PLMN; connection information (page 24, lines 15-33, network node 100 (i.e. connection to wireless device) may…train the machine learning model (i.e. target) based on received (i.e. connected) compressed data (i.e. target information)); a QoS flow; and a bandwidth part (BWP).
As to claim 11, Tullberg further discloses the method according to claim 10 (see rejection to claim 10), further comprising: in a case of [per MPEP 2111.04, The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met; with respect to claim 11, this case is not required to be performed] changing the second target information, changing a corresponding first condition or preset compression manner.
As to claim 12, Tullberg further discloses the method according to claim 10, wherein each piece of the second target information corresponds to at least one first condition (page 16, lines 25-35, wireless device 120 successively creates the compressed data by…when a number of clusters has reached a maximum number (i.e. condition)…merges one or more of the clusters into a merge cluster; the cluster as the compressed data) or corresponds to at least one preset compression manner (page 16, lines 5-25, wireless device 120 successively creates compressed data (i.e. compressing first information)…The actions performed by the wireless device 120 to create the compressed data will now be described (i.e. preset compression manner)).
As to claim 13, Tullberg further discloses the method according to claim 10, wherein configuration signaling of the first condition or the preset compression manner is the same as configuration signaling of the second target information (fig. 15, signaling of 1503 and 1506 are both Training data messages).
As to claim 14, Tullberg further discloses the method according to claim 1, wherein before the compressing, by a terminal, first information in a preset compression manner, the method further comprises: obtaining first indication information (fig. 2, page 15, lines 10-24, wireless device 120 collects a number of successive data samples), wherein the first indication information is used to indicate index information corresponding to at least one of: the first condition and the preset compression manner (page 16, lines 5-35, creates the compressed data, i.e. preset compression manner, by:…wireless device updates the cluster centroid (i.e. index)); and determining, according to the first indication information, at least one of: the first condition and the preset compression manner that correspond to the first indication information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data).
As to claim 15, Tullberg discloses a terminal (fig. 17, UE), comprising: a processor (fig. 17, page 40, lines 4-20, processing circuitry), a memory (fig. 17, page 40, lines 4-20, software…stored in…UE), and a program or an instruction that is stored in the memory and that can run on the processor, wherein the program or instruction, when executed by the processor, causes the processor to perform (fig. 17, page 40, lines 4-20, software…stored in…UE…and executable by the processing circuitry): compressing first information in a preset compression manner (page 16, lines 5-25, wireless device 120 successively creates compressed data (i.e. compressing first information)…The actions performed by the wireless device 120 to create the compressed data will now be described (i.e. preset compression manner)), and reporting compressed first information (page 18, lines 25-35, wireless device 120 transmits, to the network node 110, the compressed data), wherein the first information comprises (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data) at least one of the following: first target information (page 24, lines 15-33, network node 100 may…train the machine learning model (i.e. target) based on received compressed data (i.e. target information)); representation information of the first target information (page 21, lines 8-20, The wireless device 120 is configured to successively create compressed data (i.e. target information) by being configured to perform one or more of the following actions. The wireless device 120 is configured to associate each collected data sample to a cluster. The cluster has a cluster centroid, a cluster counter representative (i.e. 
representation information) of a number of collected data samples determined to be normal and being associated with the cluster, and a number of outlier collected data samples associated with the cluster); measurement information corresponding to the first target information (page 15, lines 25-31, wireless device may collect the number of successive data samples by performing one or more measurements; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to measurement)); neural network use information corresponding to the first target information (page 17, lines 13-14, neural networks; page 24, lines 15-35, adjusting…for one or more of the artificial neurons); information indicating whether the first target information meets a first condition (page 16, lines 25-35, wireless device 120 successively creates the compressed data by…when a number of clusters has reached a maximum number (i.e. condition)…merges one or more of the clusters into a merge cluster; the cluster as the compressed data); and value information corresponding to the first target information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data, i.e. all are values; page 24, lines 15-33, network node 100 may…train the machine learning model (i.e. target) based on received compressed data (i.e. 
target information)), wherein the first target information comprises at least one of the following: position information of the terminal (page 5, lines 5-10, user is stationary for a while and then moves, there may be many inputs from a first cluster first, and then as the user moves to another location, from another cluster, and so on. This affects how to merge and split clusters, i.e. cluster pertains to position; page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data; page 24, lines 15-33, network node 100 may…train the machine learning model (i.e. target) based on received compressed data (i.e. target information)); first measurement quantity information (page 15, lines 25-31, wireless device may collect the number (i.e. quantity) of successive data samples by performing one or more measurements; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to measurement)); first event information (page 15, lines 31-35, wireless device 120 may be triggered to collect the number of successive data samples by a communications event; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to event)); first identifier information (page 29, lines 21-25, The transmitted compressed data (i.e. target information) comprises…a list of outliers/anomalies (i.e. identifiers)); first transmission parameter information (page 15, lines 13-22, The wireless device 120 collects a number of successive data samples for training of the machine learning model comprised in the network node 110. The data samples may for example be sensor readings, such as temperature reading, or communication parameters, such as parameters of a communication link between the wireless device 120 and the network node 110. 
Some examples of such parameters are load, signal strength, signal quality, just to give some example. It should be understood that embodiments herein are not limited to compressing communication-related data but may be used for any kind of data. Examples of communication data may be beams, modulation and coding schemes, log-likelihood ratios which may be computed when knowing the MCS and SNR before doing the channel decoding, and precoder matrix indices, just to mention some examples; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to parameters)); application layer configuration information (page 2, lines 10-13, machine learning…used in many different communication applications; page 24, lines 15-33, network node 100 may…train (i.e. configure) the machine learning model based on received compressed data (i.e. target information)); and first configuration information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data, i.e. all are configuration information to configure the machine learning; page 24, lines 15-33, network node 100 may…train the machine learning model based on received compressed data).
As to claim 16, Tullberg further discloses the terminal according to claim 15, wherein the representation information of the first information is indicated by information corresponding to a neural network (page 17, lines 13-14, neural networks; page 24, lines 15-35, adjusting…for one or more of the artificial neurons; page 15, lines 13-22, The wireless device 120 collects a number of successive data samples for training of the machine learning model comprised in the network node 110).
As to claim 17, Tullberg further discloses the terminal according to claim 15, wherein the reporting compressed first information comprises: reporting the compressed first information according to a predefined cycle (page 16, line 5, wireless device 120 successively (i.e. cycle) creates compressed data); or triggering to report the compressed first information according to a trigger condition (page 15, lines 31-35, wireless device 120 may be triggered to collect the number of successive data samples by a communications event; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to event)), wherein the trigger condition comprises at least one of the following: report indication information sent by a network is received (page 15, lines 25-31, receiving the number of successive data samples (i.e. report indication information) from another device…a network node); a predefined event (page 15, lines 31-35, wireless device 120 may be triggered to collect the number of successive data samples by a communications event; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to event)); and an amount of the first information reaches a preset threshold (page 17, lines 1-5, each new data sample may be considered as a cluster centroid with an initial covariance matrix of zeroes until the memory is full, i.e. preset threshold).
As to claim 19, Tullberg further discloses the terminal according to claim 15, wherein the compressing first information in a preset compression manner comprises: compressing the first information based on a first neural network (page 17, lines 13-14, neural networks; page 24, lines 15-35, adjusting…for one or more of the artificial neurons; page 15, lines 13-22, The wireless device 120 collects a number of successive data samples for training of the machine learning model comprised in the network node 110); or compressing the first information based on a large-scale parameter (page 32, lines 9-31, Actions 201-203 [i.e. compressing]; each new data sample is considered as a cluster centroid…starting at large K values (i.e. large-scale parameter) and moving to the left); or compressing the first information by merging a plurality of parts of the first information (page 32, lines 9-31, Actions 201-203 [i.e. compressing]; each new data sample is considered as a cluster centroid…cluster (i.e. parts) merging is performed); or obtaining statistical characteristics by performing mathematical operation on a plurality of parts of the first information (page 4, lines 10-22, An outlier is an observation point that is distant from other observations. Outliers (i.e. parts) may occur by chance in any distribution, and indicate either measurement error or that the population has a heavy-tailed distribution. In the former case one may discard them or use statistics that are robust (i.e. statistical characteristics) to outliers, while in the latter case they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. In large samples, a small number of outliers is to be expected (and not due to any anomalous condition). 
The compressed data, such as the cluster centroids and cluster counters, and individual “outliers”, may be stored locally), and reporting the statistical characteristics (page 15, lines 13-22, The wireless device 120 collects a number of successive data samples for training of the machine learning model comprised in the network node 110); or compressing the first information in a differential mode; or compressing the first information according to a category of the first information (page 35, line 33 to page 36, line 14, In Action 1404, the wireless device 120 determines whether or not the sample is anomalous (i.e. category) for the selected cluster or all clusters. If the sample is determined to be anomalous, the wireless device 120 in Action 1405 stores the anomalous sample as it is since it’s an important training example in its own right. If the sample is determined not to be anomalous, it belongs to one of the clusters. Thus, in Action 1406, the wireless device 120 adds the sample to best cluster and in Action 1407, the wireless device 120 updates the cluster counter by one for that cluster. Optionally, in Action 1408, the wireless device 120 may update cluster centroid location and cluster axes. The means may be updated as follows: n = n + 1, d = x - m, and m = m + d/n. The covariance update is given above. If PCA is performed it may be recomputed based on the updated covariance matrices when a current covariance matrix is sufficiently different compared to when it was used to compute the PCA. In Action 1409, the wireless device 120 determines whether or not it is time to transmit the compressed data (i.e. 
first information) to the network node 110.); or compressing the first information according to an association relationship between at least two pieces of first information (page 5, lines 24-25, The wireless device successively creates compressed data by associating each collected data sample to a cluster); or compressing the first information according to a priority of the first information (page 35, line 33 to page 36, line 14, In Action 1404, the wireless device 120 determines whether or not the sample is anomalous for the selected cluster or all clusters. If the sample is determined to be anomalous, the wireless device 120 in Action 1405 stores the anomalous sample as it is since it’s an important (i.e. priority) training example in its own right. If the sample is determined not to be anomalous, it belongs to one of the clusters. Thus, in Action 1406, the wireless device 120 adds the sample to best cluster (i.e. priority) and in Action 1407, the wireless device 120 updates the cluster counter by one for that cluster. Optionally, in Action 1408, the wireless device 120 may update cluster centroid location and cluster axes. The means may be updated as follows: n = n + 1, d = x - m, and m = m + d/n. The covariance update is given above. If PCA is performed it may be recomputed based on the updated covariance matrices when a current covariance matrix is sufficiently different compared to when it was used to compute the PCA. In Action 1409, the wireless device 120 determines whether or not it is time to transmit the compressed data (i.e. first information) to the network node 110.).
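For clarity of the record, the centroid update quoted from Tullberg (n = n + 1, d = x - m, m = m + d/n) is the standard incremental (running-mean) computation. The following sketch is illustrative only and is not a quotation from the reference; the function name and sample values are hypothetical.

```python
# Illustrative sketch (not from the reference): the incremental cluster-centroid
# update n = n + 1, d = x - m, m = m + d/n, applied per collected data sample.
def update_centroid(m, n, x):
    """Fold a new sample x into a running mean m computed over n prior samples."""
    n += 1
    d = [xi - mi for xi, mi in zip(x, m)]      # offset of sample from centroid
    m = [mi + di / n for mi, di in zip(m, d)]  # incremental-mean step
    return m, n

# Folding samples one at a time reproduces the batch mean of the samples:
m, n = [0.0, 0.0], 0
for x in [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]:
    m, n = update_centroid(m, n, x)
# m is now [3.0, 4.0], the mean of the three samples
```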
As to claim 20, Tullberg discloses a non-transitory readable storage medium (fig. 3, wireless device 120 with memory 307), wherein the non-transitory readable storage medium stores a program or an instruction, and wherein the program or instruction, when executed by the processor, causes the processor to perform (page 40, lines 4-20, software…stored in…UE…and executable by the processing circuitry): compressing first information in a preset compression manner (page 16, lines 5-25, wireless device 120 successively creates compressed data (i.e. compressing first information)…The actions performed by the wireless device 120 to create the compressed data will now be described (i.e. preset compression manner)), and reporting compressed first information (page 18, lines 25-35, wireless device 120 transmits, to the network node 110, the compressed data), wherein the first information comprises (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data) at least one of the following: first target information (page 24, lines 15-33, network node 100 may…train the machine learning model (i.e. target) based on received compressed data (i.e. target information)); representation information of the first target information (page 21, lines 8-20, The wireless device 120 is configured to successively create compressed data (i.e. target information) by being configured to perform one or more of the following actions. The wireless device 120 is configured to associate each collected data sample to a cluster. The cluster has a cluster centroid, a cluster counter representative (i.e. 
representation information) of a number of collected data samples determined to be normal and being associated with the cluster, and a number of outlier collected data samples associated with the cluster); measurement information corresponding to the first target information (page 15, lines 25-31, wireless device may collect the number of successive data samples by performing one or more measurements; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to measurement)); neural network use information corresponding to the first target information (page 17, lines 13-14, neural networks; page 24, lines 15-35, adjusting…for one or more of the artificial neurons); information indicating whether the first target information meets a first condition (page 16, lines 25-35, wireless device 120 successively creates the compressed data by…when a number of clusters has reached a maximum number (i.e. condition)…merges one or more of the clusters into a merge cluster; the cluster as the compressed data); and value information corresponding to the first target information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data, i.e. all are values; page 24, lines 15-33, network node 100 may…train the machine learning model based on received compressed data (i.e. 
target information)), wherein the first target information comprises at least one of the following: position information of the terminal (page 5, lines 5-10, user is stationary for a while and then moves, there may be many inputs from a first cluster first, and then as the user moves to another location, from another cluster, and so on. This affects how to merge and split clusters, i.e. cluster pertains to position; page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data; page 24, lines 15-33, network node 100 may…train the machine learning model (i.e. target) based on received compressed data (i.e. target information)); first measurement quantity information (page 15, lines 25-31, wireless device may collect the number (i.e. quantity) of successive data samples by performing one or more measurements; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to measurement)); first event information (page 15, lines 31-35, wireless device 120 may be triggered to collect the number of successive data samples by a communications event; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to event)); first identifier information (page 29, lines 21-25, The transmitted compressed data (i.e. target information) comprises…a list of outliers/anomalies (i.e. identifiers)); first transmission parameter information (page 15, lines 13-22, The wireless device 120 collects a number of successive data samples for training of the machine learning model comprised in the network node 110. The data samples may for example be sensor readings, such as temperature reading, or communication parameters, such as parameters of a communication link between the wireless device 120 and the network node 110. 
Some examples of such parameters are load, signal strength, signal quality, just to give some example. It should be understood that embodiments herein are not limited to compressing communication-related data but may be used for any kind of data. Examples of communication data may be beams, modulation and coding schemes, log-likelihood ratios which may be computed when knowing the MCS and SNR before doing the channel decoding, and precoder matrix indices, just to mention some examples; page 16, lines 5-35, creates compressed data (i.e. target information) by…collected data samples (i.e. samples pertain to parameters)); application layer configuration information (page 2, lines 10-13, machine learning…used in many different communication applications; page 24, lines 15-33, network node 100 may…train (i.e. configure) the machine learning model based on received compressed data (i.e. target information)); and first configuration information (page 18, lines 14-17, cluster centroid, the cluster counter, and the number of outlier collected data samples associated with the cluster as the compressed data, i.e. all are configuration information to configure the machine learning; page 24, lines 15-33, network node 100 may…train the machine learning model based on received compressed data).
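For orientation only, the cluster-based compression that the cited Tullberg passages describe (a cluster centroid, a counter of normal samples, and an outlier count per cluster, with clusters merged once a maximum number of clusters is reached) can be sketched as follows. This is an illustrative approximation, not the reference's actual algorithm; the function name, the distance threshold, and the merge rule are hypothetical.

```python
# Illustrative sketch (not taken from the record) of cluster-based compression:
# each incoming sample either updates the nearest cluster, is counted as an
# outlier near a cluster, or starts a new cluster; when the cluster count
# exceeds a maximum, the two closest clusters are merged into a merge cluster.

def compress(samples, max_clusters=4, radius=1.0):
    clusters = []  # each: {"centroid": float, "count": int, "outliers": int}
    for x in samples:
        # find the nearest existing cluster, if any
        nearest = min(clusters, key=lambda c: abs(c["centroid"] - x), default=None)
        if nearest is not None and abs(nearest["centroid"] - x) <= radius:
            # normal sample: update the running centroid and the sample counter
            nearest["centroid"] += (x - nearest["centroid"]) / (nearest["count"] + 1)
            nearest["count"] += 1
        elif nearest is not None and abs(nearest["centroid"] - x) <= 2 * radius:
            # near a cluster but outside its radius: count as an outlier
            nearest["outliers"] += 1
        else:
            clusters.append({"centroid": x, "count": 1, "outliers": 0})
        if len(clusters) > max_clusters:
            # merge the two closest clusters into a single merge cluster
            clusters.sort(key=lambda c: c["centroid"])
            i = min(range(len(clusters) - 1),
                    key=lambda j: clusters[j + 1]["centroid"] - clusters[j]["centroid"])
            a, b = clusters[i], clusters.pop(i + 1)
            total = a["count"] + b["count"]
            a["centroid"] = (a["centroid"] * a["count"] + b["centroid"] * b["count"]) / total
            a["count"] = total
            a["outliers"] += b["outliers"]
    # the per-cluster triplets stand in for the transmitted "compressed data"
    return [(c["centroid"], c["count"], c["outliers"]) for c in clusters]
```

The returned triplets correspond to the cluster centroid, the cluster counter, and the number of outlier collected data samples that the citations identify as the compressed data sent to the network node for model training.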
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 4, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over WO 2020/139179 A1 to TULLBERG et al. (“Tullberg”) in view of U.S. Publication No. 2018/0206137 A1 to RYU et al. (“Ryu”).
As to claim 4, see similar rejection to claim 18. The apparatus teaches the method.
As to claim 18, Tullberg does not expressly disclose the terminal according to claim 15, wherein a report granularity of the first information comprises at least one of the following: each terminal; each cell; each frequency layer; each BWP; each transmit antenna port; and each receive antenna port.
Ryu discloses every UE (that is, every UE which has received the cell granularity/level reporting configuration) may perform cell granularity/level reporting, regardless of reception of an MBMS (para. 0456).
Prior to the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the reporting of Ryu into the invention of Tullberg. The suggestion/motivation would have been to recognize a location of a UE in idle state (Ryu, para. 0001). Including the reporting of Ryu into the invention of Tullberg was within the ordinary ability of one of ordinary skill in the art based on the teachings of Ryu.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OMAR J GHOWRWAL whose telephone number is (571)270-5691. The examiner can normally be reached M-F 9:00am-6:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ASAD NAWAZ can be reached at 571-272-3988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OMAR J GHOWRWAL/ Primary Examiner, Art Unit 2463