DETAILED ACTION
This Office Action is sent in response to Applicant’s Communication received on 07/01/2022 for application number 17/856,011. The Office hereby acknowledges receipt of the following items, which have been placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, IDS, Claims, and Certified Copy of Foreign Priority Application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 1 is objected to because of the following informalities:
For claim 1, on p.30, ll.4, “plurality of edge devices” should read as “a plurality of edge devices”.
For claim 1, on p.30, ll.8, “into encoding matrices” should read as “into encoded matrices”.
For claim 6, on p.31, ll.24, “plurality of edge devices” should read as “a plurality of edge devices”.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hong et al. (From IDS filed on 07/01/2022: “Chebyshev Polynomial Codes: Task Entanglement-based Coding for Distributed Matrix Multiplication”), hereinafter Hong1.
Regarding claim 1, Hong1 discloses:
A method for a main server to perform distributed matrix computation using plurality of edge devices [See figure 1], the method comprising:
a division operation of dividing first and second matrices to be computed into m first partial matrices and n second partial matrices, respectively [“In our scheme, the master divides input matrices A and B into sub-matrices of equal size A_w ∈ F^((a/m)×b) for w ∈ 1:m, and B_z ∈ F^(b×(c/n)) for z ∈ 1:n” Sec.3; Sec.4.2, step 1];
an encoding operation of encoding the m first partial matrices and the n second partial matrices into encoding matrices for each edge device on the basis of task entanglement-based coding employing a Chebyshev polynomial [“Generate encoding functions p_A(x), p_B(x) as in (2), based on the divided sub-matrices of A, B in (1) and L1, L2 degree Chebyshev polynomials f(x), g(x).” Sec.4.2, step 3];
a transmission operation of transmitting the encoded matrices for each edge device to the corresponding edge device; a reception operation of receiving matrix computation task results from the edge devices; [“Size of the transmitted data between the master and workers. It includes encoded matrices from the master to workers (task-allocation communication load)2 and computation results returned from workers to the master (computation-return communication load)” Sec.2, discloses transmitting data and reception] and
a decoding operation of, when a number of received matrix computation task results becomes a first recovery threshold, decoding the received matrix computation task results to recover a computation result of the first matrix and the second matrix [“Recovery Threshold: Minimum number of computation results C̃_{i,j} that the master requires to obtain the final result C in the worst-case scenario” Sec.2; “decoding at the master can be done by inversion of the coefficient matrix in (10)” Sec.4.4; see also Sec.6].
Regarding claim 2, Hong1 discloses:
wherein the encoding operation comprises:
a first encoding operation of encoding the m first partial matrices into L2 encoding matrices for each edge device according to a determined number L (L = L1·L2, where L1 and L2 are coprime) of tasks on the basis of task entanglement-based coding employing a first Chebyshev polynomial with an order of L1; and a second encoding operation of encoding the n second partial matrices into L1 encoding matrices for each edge device on the basis of task entanglement-based coding employing a second Chebyshev polynomial with an order of L2.
[“Generate encoded matrices Ā_{i,j} and B̄_{i,k} for j ∈ 1:L2 and k ∈ 1:L1, for i ∈ 1:W” Sec.4.2, Step 5; “each worker W_i starts to compute L tasks” Sec.4.3; “L1 and L2 … are the degrees of f(x) and g(x) … A_w f(x)^(w−1), w ∈ 1:m and B_z g(x)^(z−1), z ∈ 1:n … For example, if we set m = n, this constraint is satisfied by choosing a prime number for L2 = m and L1 smaller than L2” Sec.4.2, Step 2. This discloses the determination of L1 and L2 and teaches a prime number for L2 = m with L1 < L2, which implies the pair is coprime for positive numbers, since a prime number has only 1 and itself as its factors, and every positive number less than a prime number shares only the common divisor 1 with it. Additionally, it would have been obvious to one of ordinary skill in the art that L1 and L2 are used to determine the number of tasks, which would be positive.].
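The coprimality rationale applied above can be checked mechanically (an illustrative sketch; the value of L2 is arbitrary):

```python
from math import gcd

# If L2 is prime, every positive L1 < L2 is coprime to it, so a pair
# chosen as in Hong1, Sec.4.2, Step 2 always satisfies gcd(L1, L2) = 1.
L2 = 7  # a prime number (L2 = m in the cited example)
assert all(gcd(L1, L2) == 1 for L1 in range(1, L2))
```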
Regarding claim 3, Hong1 teaches:
wherein the encoding operation further comprises an evaluation point selection operation of selecting L2 evaluation points to be used in the first encoding operation and L1 evaluation points to be used in the second encoding operation [See Sec.4.1 and Algorithm 1].
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 1 is also rejected under 35 U.S.C. 103 as being unpatentable over Hong et al. (NPL: “Squeezed Polynomial Codes: Communication-efficient Coded Computation in Straggler-Exploiting Distributed Matrix Multiplication”), hereinafter Hong2, and in view of Fahim et al. (NPL: “Numerically Stable Polynomially Coded Computing”), hereinafter Fahim.
Regarding claim 1, Hong2 discloses:
A method for a main server to perform distributed matrix computation using plurality of edge devices [See figure 1], the method comprising:
a division operation of dividing first and second matrices to be computed into m first partial matrices and n second partial matrices, respectively [“a master divides input matrices A and B into sub-matrices of equal size A_{w,x}^T ∈ F^((a/m)×(b/p)) for w ∈ 1:m and x ∈ 1:p, and B_{y,z} ∈ F^((b/p)×(c/n)) for y ∈ 1:p and z ∈ 1:n” Sec.II];
an encoding operation of encoding the m first partial matrices and the n second partial matrices into encoding matrices for each edge device based on task entanglement [“Ã_{i,j} ∈ F^((a/m)×(b/p)), B̃_{i,j} ∈ F^((b/p)×(c/n)) are encoded from A and B by the encoding functions at the master”, Sec.II; “minimize task-allocation communication load by applying squeezed polynomial codes” Sec.III];
a transmission operation of transmitting the encoded matrices for each edge device to the corresponding edge device; a reception operation of receiving matrix computation task results from the edge devices; [Figures 1 and 2, shows sending the encoded data to the workers and receiving data results back] and
a decoding operation of, when a number of received matrix computation task results becomes a first recovery threshold, decoding the received matrix computation task results to recover a computation result of the first matrix and the second matrix [Figure 1, shows decoding of the received encoded matrices; "Recovery threshold R: The minimum number of the computation results of the sub-tasks... that the master needs to wait for to obtain the final product C" Sec.II].
However, Hong2 does not explicitly disclose an encoding operation of encoding the m first partial matrices and the n second partial matrices into encoding matrices for each edge device on the basis of task entanglement-based coding employing a Chebyshev polynomial.
In the analogous art of coded matrix multiplication in distributed systems, Fahim teaches encoding matrices for each edge device employing a Chebyshev polynomial ["The use of Chebyshev polynomials leads to encoding matrices" Sec.I; see Sec.IV, CHEBYSHEV POLYNOMIALS BASED CODES, wherein the encoding functions Pa and Pb are modified to use Chebyshev polynomials for encoding].
It would have been obvious to one of ordinary skill in the art, having the teachings of Hong2 and Fahim before him or her before the effective filing date of the claimed invention, to modify the encoding method of Hong2 to implement a numerically stable polynomial code for encoding data as taught by Fahim, which offers better communication and computation costs [Fahim: Sec.I and V].
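The numerical-stability motivation attributed to Fahim can be illustrated informally (a sketch not drawn from the cited papers: decoding a polynomial code inverts a coefficient matrix, and a Chebyshev basis evaluated at Chebyshev nodes is far better conditioned than a monomial Vandermonde matrix at equispaced points):

```python
import numpy as np

n = 20
equi = np.linspace(-1, 1, n)
# Chebyshev nodes on [-1, 1]
cheb_pts = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
# Monomial (Vandermonde) basis at equispaced points
V_mono = np.vander(equi, n, increasing=True)
# Chebyshev basis T_k(x) = cos(k * arccos(x)) at Chebyshev nodes
V_cheb = np.cos(np.outer(np.arccos(cheb_pts), np.arange(n)))
# The Chebyshev coefficient matrix is dramatically better conditioned.
assert np.linalg.cond(V_cheb) < np.linalg.cond(V_mono)
```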
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (NPL: "Fully Private Coded Matrix Multiplication From Colluding Workers"), hereinafter Kim1, in view of Kim et al. (NPL: “Private Coded Matrix Multiplication”), hereinafter Kim2, and further in view of Hong2, and further in view of Fahim.
Regarding claim 6, Kim1 discloses:
A method of performing distributed matrix computation using a main server and plurality of edge devices in a distributed computing environment in which the plurality of edge devices have a first matrix dataset and a second matrix dataset to be computed [Figure 1], the method comprising:
a one-hot encoding operation in which the main server performs one-hot encoding on indices in the corresponding datasets of a first matrix and a second matrix to be computed [“while concealing the index of the desired matrix in the library against the workers” Sec.I; “a master wants to obtain a matrix multiplication… As a part of the queries Q_{A,i} and Q_{B,i}, the vectors for indicating the two matrices A_{d1} and B_{d2} from each library are denoted as θ_A … and θ_B …” Sec.III.A; “where θ_A and θ_B denote the indicating vectors for the desired matrices A_1 and B_3, thus implying that θ_A = [1 0 0] and θ_B = [0 0 1]” Sec.III.B, shows one-hot coding for task selection to utilize matrix multiplication];
a transmission operation in which the main server transmits the matrices encoded for each edge device to the corresponding edge device [Figure 1, shows transmitting the queries Q_{A,i} and Q_{B,i} for the matrices to be used];
a first matrix encoding operation in which the edge devices multiply all matrices of the first matrix dataset by the first encoding matrix to encode the first matrix; a second matrix encoding operation in which the edge devices multiply all matrices of the second matrix dataset by the second encoding matrix to encode the second matrix; ["As a part of the queries Q_{A,i} and Q_{B,i}, the vectors for indicating the two matrices A_{d1} and B_{d2} … such that A·θ_A = A_{d1} and B·θ_B = B_{d2}" Sec.III.A];
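The one-hot selection mechanism quoted above (A·θ_A = A_{d1}) can be sketched as follows (illustrative names only; stacking the library along a leading axis is an assumption about layout, not taken from Kim1):

```python
import numpy as np

rng = np.random.default_rng(1)
library_A = rng.standard_normal((3, 4, 5))  # three candidate 4x5 matrices
theta_A = np.array([1.0, 0.0, 0.0])         # one-hot indicator for A_1

# Contracting the indicator with the stacked library computes
# sum_k theta_A[k] * library_A[k], i.e., it selects the desired matrix
# without revealing its index to any single worker by itself.
selected = np.tensordot(theta_A, library_A, axes=1)
assert np.array_equal(selected, library_A[0])
```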
a matrix computation operation in which the edge devices perform a matrix computation task on the encoded first matrix and the encoded second matrix ["After encoding the libraries,
W