DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/12/2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (Learning based 2D Irregular Shape Packing, arxiv.org, arXiv:2309.10329v1 [cs.GR] 19 Sep 2023, hereinafter “Yang”) in view of Onzon (US 20190172182 A1, hereinafter, “Onzon”).
Regarding claim 1, Yang teaches A method of processing a plurality of UV patches of a three-dimensional model, the method comprising: (Abstract, “Our method iteratively selects and groups subsets of UV patches into near-rectangular super patches, essentially reducing the problem to bin-packing, based on which a joint optimization is employed to further improve the packing ratio”)
dividing the plurality of UV patches into a primary set of the UV patches and a secondary set of the UV patches based on a comparison between a size of each of the plurality of UV patches and a patch size threshold, (page 3, section 3, “Given the set of UV patches, we filter out the tiny patches and firstly process the non-tiny ones in two stages”; page 13, Algorithm 3: [Algorithm 3: media_image1.png]; Fig. 2: “Input Patches” are divided into two sets of patches (i.e., “Non-tiny” and “Tiny”)). Note that: (1) S′ is the “Non-tiny” set of the input UV patches S, while S − S′ is accordingly the “Tiny” set of the input UV patches S, based on the size threshold [media_image2.png]; (2) the “Non-tiny” set can be taken as a primary set of UV patches while the “Tiny” set can be regarded as a secondary set of UV patches; and (3) when the “Input Patches” are divided into two sets of patches, it is obvious to one having ordinary skill in the art that the size of each input patch is compared to the threshold to determine which set it belongs to.
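As a non-limiting illustration (hypothetical code, not from Yang or the record), the dividing step described above reduces to comparing each patch's size against the threshold:

```python
# Illustrative sketch: split input UV patches into a "non-tiny" (primary)
# and a "tiny" (secondary) set by comparing each patch's size (here, area)
# against a size threshold. Names and the area representation are assumptions.
def split_patches(patch_areas, size_threshold):
    """Return (primary, secondary) index lists based on a size threshold."""
    primary = [i for i, a in enumerate(patch_areas) if a > size_threshold]
    secondary = [i for i, a in enumerate(patch_areas) if a <= size_threshold]
    return primary, secondary

# Example: with a threshold value of 4 (cf. a 2x2-pixel patch per Onzon),
# patches larger than the threshold fall into the primary set.
primary, secondary = split_patches([100.0, 2.0, 37.5, 4.0], 4)
```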
generating a plurality of subsets by applying a high-level group selector network (HSN) on the UV patches of the primary set, at least one of the plurality of subsets being obtained by iteratively applying the HSN on the UV patches of the primary set, a total number of UV patches in each of the plurality of subsets being less than or equal to 4; (page 3, Fig. 2: the loop from “Low-level Packing Policy” to “High-level Group Selector” indicates that the HSN is iteratively applied; page 3, col. left, para. 4, “We first employ a sampling-based group selector network (HSN Sec. 4.3) to identify a potential subset S′ of at most 𝐻 patches that can form a nearly rectangular super-patch”; page 7, col. left, para. 3, “we compare the packing ratio of LL under different horizons 𝐻. We train four low-level algorithms with 𝐻 = 2, …, 5 and our resulting packing ratios over the 2500 random problems are show in Tab. 3. Our approach performs the worst when 𝐻 = 2, where our low-level policies become myopic, but the quality varies only slightly when 𝐻 ≥ 2. Therefore, we choose 𝐻 = 4 for the highest quality.”).
[media_image3.png]
Note that: (1) the HSN is applied to the “Non-tiny” UV patches (the UV patches of the primary set); (2) a plurality of subsets are generated by the HSN as its output during the iterative loop processing in Fig. 2; (3) “at most 𝐻 patches” indicates that a total number of UV patches in each of the plurality of subsets is less than or equal to H; and (4) for the highest quality, H can be selected as H = 4 by Yang.
grouping the plurality of subsets of the UV patches of the primary set into a plurality of super-patches, each of the plurality of super-patches including different UV patches in the primary set that are packed together in a predefined shape; (page 3, section 3, “Our first stage aims to turn the non-tiny patches into nearly rectangular super-patches (Sec. 4)”). Note that: (1) the HSN generates a plurality of subsets of the UV patches of the primary set as discussed above; and (2) the plurality of subsets of the UV patches of the primary set are grouped or clustered into super-patches and packed into a nearly rectangular shape as a predefined shape.
assembling the plurality of super-patches together into a first bounding box; adjusting poses of the plurality of super-patches to reduce spacing between the plurality of super-patches; and (page 3, section 3, “Our second stage (Sec. 4.4) assembles all the patches using a heuristic bin-packing algorithm. A joint local optimization is then performed to squeeze all the patches as tightly as possible”; section 4.4, “take the rectangular shape assumption and use the divide-and-conquer algorithm implemented in the Trimesh library [Dawson-Haggerty et al. 2019] to assemble all the super-patches together. With the initially packed result from bin-packing, we then propose to adjust the joint poses of all the patches, locally squeezing them together via numerical optimization”). Note that: squeezing all the patches as tightly as possible is equivalent to reducing spacing between the plurality of super-patches.
filling gaps between the UV patches in the primary set with the UV patches in the secondary set. (page 6, section 4.4, “For these small patches, we adopt a conventional approach to sort them in an area-descending order and then use the scanline algorithm [Hu et al. 2018] to fit them into the gaps and holes of the super-patches. During this stage, we also replace alpha shapes with the original patches to expose the scanline algorithm to potentially useful gaps and holes. These final adjustments are illustrated in Fig. 6”).
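As a non-limiting illustration (hypothetical and simplified, not the scanline algorithm of Hu et al. 2018), the cited gap-filling step sorts the tiny patches in area-descending order and greedily places each into the first gap that can hold it; here gaps and patches are reduced to scalar areas:

```python
# Illustrative sketch: fit tiny patches (largest first) into remaining gaps.
# Real scanline packing operates on pixel grids; areas are an assumption here.
def fill_gaps(gap_areas, tiny_patch_areas):
    """Assign each tiny patch (area-descending order) to the first gap it fits."""
    gaps = list(gap_areas)                     # remaining free area per gap
    order = sorted(range(len(tiny_patch_areas)),
                   key=lambda i: tiny_patch_areas[i], reverse=True)
    assignment = {}                            # patch index -> gap index
    for i in order:
        for g, free in enumerate(gaps):
            if tiny_patch_areas[i] <= free:
                assignment[i] = g
                gaps[g] -= tiny_patch_areas[i]
                break
    return assignment

# Patches of area 5, 3, 2 are placed largest-first into gaps of area 6 and 4;
# the area-2 patch finds no remaining room and stays unassigned.
assignment = fill_gaps([6.0, 4.0], [3.0, 5.0, 2.0])
```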
the HSN is trained based on a constraint that when a first ground truth packing ratio of a first training subset of the plurality of subsets is less than a second ground truth packing ratio of a second training subset of the plurality of subsets, a first predicted packing ratio determined through the HSN based on the first training subset is less than a second predicted packing ratio determined through the HSN based on the second training subset. (Yang, page 5, col. right, para. 3, “In view of this, we train our HSN via supervised metric learning [Chopra et al. 2005]. During each learning iteration, we randomly sample two (at most) 𝐻-patch groups denoted as S’ and S’’ with ground truth packing ratios denoted as pr’ and pr’’ and HSN predicted packing ratio denoted as HSN(S’) and HSN(S’’). We then update HSN via the following margin ranking loss:
[equation: margin ranking loss, media_image4.png]
”). Note that: (1) HSN is trained with two training subsets of the plurality of subsets; (2) the training subsets have ground truth packing ratios denoted as pr’ and pr’’ and HSN-predicted packing ratios denoted as HSN(S’) and HSN(S’’); (3) the training subset with the smaller ground truth packing ratio (e.g., pr’) can be regarded as a first training subset (e.g., S’) of the plurality of subsets, while the other training subset (S’’) can be regarded as a second training subset of the plurality of subsets; and (4) the margin ranking loss dictates a condition: when pr’ < pr’’ and HSN(S’) < HSN(S’’) at the same time, the margin ranking loss is “0” (= max(0, −[media_image5.png])). However, when pr’ < pr’’ and HSN(S’) > HSN(S’’), the margin ranking loss is a positive value (= [media_image6.png]). In this way, during the training of HSN, the ranking loss is used to enforce this condition, gradually reducing or minimizing the ranking loss to meet the goal of the corresponding ranking learning or ordinal learning so that “when pr’ < pr’’, HSN(S’) < HSN(S’’)”.
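As a non-limiting illustration (a standard margin ranking loss in hypothetical code, not necessarily Yang's exact formulation), the ordering behavior described above can be sketched as:

```python
# Illustrative sketch: a margin ranking loss that is zero when the predicted
# ordering matches the ground-truth ordering, and positive when it is violated.
def margin_ranking_loss(pred_a, pred_b, gt_a, gt_b, margin=0.0):
    """Zero when (pred_a, pred_b) agree with the ordering of (gt_a, gt_b)."""
    y = 1.0 if gt_a > gt_b else -1.0          # desired sign of (pred_a - pred_b)
    return max(0.0, -y * (pred_a - pred_b) + margin)

# Correct ordering (pr' < pr'' and HSN(S') < HSN(S'')): loss is 0.
ok = margin_ranking_loss(0.6, 0.8, gt_a=0.5, gt_b=0.9)
# Violated ordering (pr' < pr'' but HSN(S') > HSN(S'')): loss is positive.
bad = margin_ranking_loss(0.8, 0.6, gt_a=0.5, gt_b=0.9)
```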
Yang fails to disclose, but in the same field of machine learning, Onzon discloses … the patch size threshold being less than or equal to 5 pixels; … (Onzon, para. [0131], “Note that in this version we not only allow patches of odd size (like 3 by 3 pixels) but also of even size (like 2 by 2 pixels)”). Note that: a patch size of 2x2 pixels can be used as the patch size threshold, yielding a threshold value of 4, which is less than 5.
Yang and Onzon are in the same field of endeavor, namely image information processing. Before the effective filing date of the claimed invention, it would have been obvious to apply a predefined patch size, as taught by Onzon, to Yang. The motivation would have been to use “a predetermined pixel patch shape and a predetermined pixel patch size” (Onzon, para. [0126]). Doing so would allow the predefined patch size to serve as the patch size threshold. Therefore, it would have been obvious to combine Yang with Onzon.
Regarding claim 2, Yang in view of Onzon discloses The method of claim 1, wherein the grouping further comprises:
based on a HSN configured to identify a subset from the primary set to form a super-patch of the plurality of super-patches, (page 3, Fig. 2: the “High-level Group Selector” consists of an “HSN” to identify a subset (the 4-item dashed-line block immediately to the right of the HSN block); the subset is used to further form a super-patch; section 3, “We first employ a sampling-based group selector network (HSN Sec. 4.3) to identify a potential subset S' of at most 𝐻 patches that can form a nearly rectangular super-patch”).
selecting M subsets from the primary set, each of the M subsets including respective N UV patches, the M being a first positive integer, the N being a second positive integer smaller than M; (page 5, section 4.2, Algorithm 1, “
[Algorithm 1: media_image7.png]
”; page 3, Fig. 2: the “High-level Group Selector” consists of an “HSN” to identify a subset (the 4-item dashed-line block immediately to the right of the HSN block); the subset is used to further form a super-patch). Note that: the number of randomly sampled subsets (e.g., 400) can be regarded as M (M = 400), a first positive integer, while the number of patches in each subset (e.g., 4) can be regarded as N (N = 4), a second positive integer smaller than the first positive integer M (M = 400).
determining an estimated area-averaged packing ratio associated with each of the M subsets; (page 5, section 4.3, “Our low-level policies can only sort and pack a small subset S’ of 𝐻 patches. In order to solve practical UV packing problems with hundreds of patches, we need to iteratively select S’ from S. To this end, we design a weighted average packing ratio pr(•) to evaluate the quality of the updated configuration pr(S − S′ ∪ LL(S′)) and pick S′ corresponding to the highest ratio. We then train our HSN to rank the quality of S′ without actually calling the costly low-level function LL(S′). Finally, we propose a sampling strategy to further reduce the calls to HSN”;
section 4.3.1, “
[equations: area-averaged packing ratio, media_image8.png]
”). Note that: By using the formula for area-averaged packing ratio above, an estimated area-averaged packing ratio associated with each of the M subsets can be determined.
reordering the M subsets based on the estimated area-averaged packing ratios; and determining L subsets from the M subsets that correspond to L largest estimated area-averaged packing ratios. (page 6, section 4.3.3, “we randomly sample 400 subsets of patches and predict their packing ratio via a batched HSN evaluation. The top 10 out of 400 best groups are then forwarded to the low-level algorithm to evaluate the ground truth packing ratio pr(S – S' ∪ LL(S’)), and finally the best of the 10 groups is adopted to form the next super-patch. As outlined in Alg. 1 of the supplementary, this procedure is repeated until the updated packing ratio pr(S) is not higher than the current value”). Note that: (1) for each subset in S’, “a weighted average packing ratio pr(•) to evaluate the quality of the updated configuration pr(S – S’ ∪ LL(S’))” is used; (2) the subsets in S’ are sorted in descending order based on the estimated area-averaged packing ratios (see page 5, section 4.2, “Algorithm 1” of Yang); and (3) the top 10 groups correspond to the 10 largest estimated area-averaged packing ratios, and the number of groups (10 in “Algorithm 1” of Yang) can be regarded as L (L = 10).
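As a non-limiting illustration (hypothetical code; the toy scoring function stands in for Yang's trained HSN), the cited sampling-and-ranking procedure can be sketched as sampling M subsets of N patches, scoring each, and keeping the top L:

```python
import random

# Illustrative sketch: randomly sample M subsets of N patch indices each,
# score them with a predictor (a stand-in for HSN here), and keep the L
# subsets with the largest predicted packing ratios, in descending order.
def select_top_subsets(num_patches, M, N, L, score_fn, seed=0):
    rng = random.Random(seed)
    subsets = [tuple(sorted(rng.sample(range(num_patches), N)))
               for _ in range(M)]
    ranked = sorted(subsets, key=score_fn, reverse=True)   # descending score
    return ranked[:L]

# Toy score: prefer subsets with a small index sum (purely for illustration).
top = select_top_subsets(num_patches=100, M=400, N=4, L=10,
                         score_fn=lambda s: -sum(s))
```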
Regarding claim 3, Yang in view of Onzon discloses The method of claim 2, wherein the grouping further comprises:
packing the UV patches in each of the L subsets together into a respective subset bounding box; (Yang, pages 3-4, section 4.1, “we propose to model the packing procedure as a Markov Decision Process (MDP) and train LPN to maximize the packing ratio via reinforcement learning”; page 4, section 4.1.1, “we assume LPN can observe the current packing patch 𝑃𝑖−1 and a set of at most 𝐻 future patches to be packed, i.e., 𝑠𝑖 ≜ (𝑃𝑖−1, 𝑝𝑖 , …, 𝑝𝑖+𝐻−1). This is because future patches can affect the pose of the current patch in order to achieve joint optimality”; page 4, section 4.1.3, “Finally, we update the next state as 𝑠𝑖+1 ≜ ( [𝑅(𝜃★𝑖 )𝑝𝑖 + 𝑡★𝑖 ] ∪ 𝑃𝑖−1, 𝑝𝑖+1, …, 𝑝𝑖+𝐻). Note that we use the distance between center-of-mass as a surrogate measure for the packing ratio. We choose not to use the packing ratio as our objective function, because the new patch 𝑝𝑖 can oftentimes be entirely contained in the bounding box of 𝑃𝑖−1 and all the poses of 𝑝𝑖 inside the bounding box has the same packing ratio”). Note that: for each of the L (L = 10) subsets, it is obvious to one having ordinary skill in the art that the UV patches are packed into the corresponding bounding box.
determining an area-averaged packing ratio associated with each of the packed L subsets; and (page 5, section 4.3.1, “
[equations: area-averaged packing ratio, media_image8.png]
”). Note that: an area-averaged packing ratio associated with each of the packed L subsets can be calculated using the formulas just above.
determining a first super-patch of the plurality of super-patches from the packed L subsets, the first super-patch corresponding to a largest area-averaged packing ratio among the determined area-averaged packing ratios of the packed L subsets. Note that: it is obvious to one having ordinary skill in the art that: (1) the area-averaged packing ratios of the L subsets can be sorted in descending order; and (2) the subset with the largest area-averaged packing ratio of the L subsets can be regarded as a first super-patch of the plurality of super-patches from the packed L subsets.
Regarding claim 4, Yang in view of Onzon discloses The method of claim 3, wherein:
the packing the UV patches in each of the L subsets together into the respective subset bounding box further includes packing the UV patches in a first subset of the L subsets into a first subset bounding box, and
the packing the UV patches in the first subset of the L subsets into the first subset bounding box further includes: Note that: (1) the UV patches in each of the L subsets are packed into the corresponding bounding box; and (2) it is obvious to one having ordinary skill in the art that one of the L subsets and its corresponding bounding box can be regarded as a first subset of the L subsets and a first subset bounding box, respectively.
based on a low-level sorter network (LSN) that is configured to determine a packing order of the UV patches in the first subset, (Yang, page 5, section 4.2, “Our LSN provides the optimal patch ordering for the LPN to achieve the best packing ratio, as our LPN can only pack the patches in a given order”).
organizing the UV patches in the first subset into a connected graph in which already-packed patches and to-be-packed patches of the first subset are connected to each other; (Yang, page 5, section 4.2, “we apply a Graph Attention Network (GAT) module [Velickovic et al. 2017], which is known to be effective in solving sorting tasks [Hu et al. 2020; Zhao et al. 2022]. We organize all the patches into a fully connected graph, where the nodal input of GAT is the patch’s feature
[media_image9.png], along with the feature of the already packed super-patch [media_image10.png]
”). Note that: (1) here all the patches are equivalent to the UV patches in the first subset; and (2) the UV patches in the first subset, including already-packed patches and to-be-packed patches are connected to each other through a fully connected graph.
inputting node features of the UV patches in the first subset into a graph attention network (GAT) to obtain graph features of the UV patches of the first subset; (Yang, page 5, section 4.2, “we apply a Graph Attention Network (GAT) module [Velickovic et al. 2017], which is known to be effective in solving sorting tasks [Hu et al. 2020; Zhao et al. 2022]. We organize all the patches into a fully connected graph, where the nodal input of GAT is the patch’s feature
[media_image9.png], along with the feature of the already packed super-patch [media_image10.png]
”). Note that: the patch’s feature [media_image9.png], along with the feature of the already packed super-patch [media_image10.png], serves as the nodal inputs, i.e., the input node features.
converting the graph features to corresponding Q-values via a multilayer perceptron (MLP) that includes an input layer and an output layer, and one or more hidden layers with stacked neurons; and (Yang, page 5, section 4.2, “Then the patch features are converted to their corresponding Q-values using an MLP network. This architecture parameterizes our sorting policy, denoted as 𝜋LSN. Similar to 𝜋LPN, 𝜋LSN is trained using DDQN via randomly sampled batches of at most 𝐻 patches coming from the same 3D model”). Note that: it is obvious to one having ordinary skill in the art that the MLP has an input layer, an output layer, and hidden layers with stacked neurons.
determining a first one of the to-be-packed patches to pack to the already-packed patches based on the Q-values. (Yang, page 5, section 4.2, “We represent LSN as another policy function 𝑎’𝑖 = 𝜋LSN(𝑠𝑖 ) that selects the next patch to be packed, i.e., 𝑎’𝑖 consists of the Q-value of the 𝑘 future patches … GAT outputs the high-level graph feature for each of the existing patches”). Note that: the Q-values of the corresponding reinforcement learning network guide the selection of a first one (the next patch) of the future patches to be packed to the already-packed patches during the network state updating.
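As a non-limiting illustration (hypothetical code with random weights, not Yang's trained LSN), the cited mapping from per-patch graph features through an MLP to Q-values, followed by greedy selection of the next patch, can be sketched as:

```python
import numpy as np

# Illustrative sketch: a small MLP (input layer -> ReLU hidden layer ->
# output layer) maps each patch's graph feature to a scalar Q-value, and
# the patch with the largest Q-value is chosen as the next one to pack.
def mlp_q_values(features, W1, b1, W2, b2):
    hidden = np.maximum(0.0, features @ W1 + b1)   # ReLU hidden layer
    return (hidden @ W2 + b2).ravel()              # one Q-value per patch

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))                    # 5 candidate patches, 8-dim features
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)    # random stand-in weights
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
q = mlp_q_values(feats, W1, b1, W2, b2)
next_patch = int(np.argmax(q))                     # greedy selection by Q-value
```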
Regarding claim 5, Yang in view of Onzon discloses The method of claim 4, wherein the node features of the UV patches of the first subset are determined based on a fully convolutional network (FCN) in which the UV patches of the first subset are encoded into a F-dimensional latent space, the F being a positive integer. (Yang, page 4, section 4.1.1, “Practically, each patch can be of arbitrary geometric shapes, so we rasterize each patch in 𝑠𝑖 (including 𝑃𝑖−1) to a 50 × 50 2D image. For each patch, we move their center-of-mass (COM) to the image center and encode each patch using a shared Fully Convolutional Network (FCN) into a 432-dimensional latent code 𝑝𝑖 ≜ FCN(𝑝𝑖), which is concatenated as 𝑠𝑖 = FCN(𝑠𝑖 ) for short”).
Regarding claim 6, Yang in view of Onzon discloses The method of claim 4, where the packing the UV patches in the first subset of the L subsets further comprises:
based on a low-level pose network (LPN) that is configured to determine a pose of a UV patch in the primary set, (Yang, page 3, section 4.1, “Given 𝑝𝑖 and 𝑃𝑖−1, the low-level packing algorithm needs to select the translation 𝑡𝑖 and rotation 𝜃𝑖 for 𝑝𝑖 such that the packed shape 𝑃𝑖 ≜ [𝑅(𝜃𝑖 )𝑝𝑖 + 𝑡𝑖 ] ∪ 𝑃𝑖−1 is collision-free with a high packing ratio”).
determining a state space, the state space indicating position states of the already-packed patches and the to-be-packed patches in the first subset;
determining an action space for the to-be-packed patches, the action space indicating candidate packing actions for the to-be-packed patches; (Yang, page 4, section 4.1, “the MDP is identified with a tuple < 𝑆,𝐴, 𝜏, 𝑟 >, which models the procedure of a decision-maker iteratively observing the current system state in the state space 𝑆 and taking an action in the action space 𝐴 to change the environment”; page 4, section 4.1.1, “State Space 𝑆. During the 𝑖-th iteration, the LPN observes the current system
state 𝑠𝑖 in the state space 𝑆. In our problem, we assume LPN can observe the current packing patch 𝑃𝑖−1 and a set of at most 𝐻 future patches to be packed, i.e., 𝑠𝑖 ≜ (𝑃𝑖−1, 𝑝𝑖 , …, 𝑝𝑖+𝐻−1). This is because future patches can affect the pose of the current patch in order to achieve joint optimality”). Note that: (1) when packing the current 𝑃𝑖−1 in the first subset, the future patches can be regarded as the to-be-packed patches; and (2) an action space and a state space are common to a reinforcement learning network and its iterations.
applying each of the candidate packing actions in the action space on the first one of the to-be-packed patches; (Yang, page 4, section 4.1.2, “Action Space 𝐴. Having observed 𝑠𝑖, our LPN can be represented as a policy function 𝑎𝑖 = 𝜋LPN(𝑠𝑖 ) that maps the state to an action 𝑎𝑖 in the action space 𝐴”).
determining an updated state space corresponding to each of the candidate packing actions that is applied on the first one of the to-be-packed patches; (Yang, page 4, section 4.1.3, “Our state transition function 𝑠𝑖+1 = 𝜏 (𝑠𝑖 , 𝑎𝑖 ) computes the next state 𝑠𝑖+1 from 𝑠𝑖 and 𝑎𝑖 by converting the action 𝜃𝑖 and 𝜙𝑖 into a collision-free, tightly packed pose … we update the next state as 𝑠𝑖+1 ≜ ( [𝑅(𝜃★𝑖 )𝑝𝑖 + 𝑡★𝑖 ] ∪ 𝑃𝑖−1, 𝑝𝑖+1, …, 𝑝𝑖+𝐻 )”). Note that: (1) the state transition function determines a state-space updating method, i.e., the state space can be updated or transitioned accordingly; and (2) each candidate packing action is associated with the state space and can be applied to a patch of the future patches (𝑝𝑖, 𝑝𝑖+1, …, 𝑝𝑖+𝐻; the to-be-packed patches).
determining a reward value corresponding to each of the updated state spaces associated with the first one of the to-be-packed patches, each of the reward values corresponding to an area-average packing ratio associated with the respective candidate packing action; (Yang, page 4, section 4.1.4, “After each state transition, the policy receives a sparse reward signal defined as: 𝑟 (𝑠𝑖 , 𝑎𝑖 , 𝑠𝑖+1) = pr(𝑃𝑖 )I[𝑖 = 𝐻], where pr(𝑃) is the packing ratio of super-patch 𝑃 and I[𝑖 = 𝐻] is an indicator function of the last iteration”). Note that: (1) the reward value 𝑟 (𝑠𝑖 , 𝑎𝑖 , 𝑠𝑖+1) corresponds to the states 𝑠𝑖 and 𝑠𝑖+1 of the updated state space and is associated with the first one of the to-be-packed patches (𝑃𝑖 ); (2) each of the reward values 𝑟 (𝑠𝑖 , 𝑎𝑖 , 𝑠𝑖+1) is associated with the packing ratio (pr(𝑃𝑖 )) of the super-patch 𝑃; and (3) 𝑟 (𝑠𝑖 , 𝑎𝑖 , 𝑠𝑖+1) is also associated with the respective candidate packing action 𝑎𝑖.
determining a packing action from the candidate packing actions in the action space that corresponds to a largest reward value of the reward values; and
packing the first one of the to-be-packed patches by adjusting a pose and a distance of the first one of the to-be-packed patches according to the determined packing action. (Yang, page 5, section 4.1.4, “
[equations: media_image11.png]
”; page 4, section 4.1.3, “Our state transition function 𝑠𝑖+1 = 𝜏 (𝑠𝑖 , 𝑎𝑖 ) computes the next state 𝑠𝑖+1 from 𝑠𝑖 and 𝑎𝑖 by converting the action 𝜃𝑖 and 𝜙𝑖 into a collision-free, tightly packed pose … we update the next state as 𝑠𝑖+1 ≜ ( [𝑅(𝜃★𝑖 )𝑝𝑖 + 𝑡★𝑖 ] ∪ 𝑃𝑖−1, 𝑝𝑖+1, …, 𝑝𝑖+𝐻 )”). Note that: (1) solving the stochastic optimization is equivalent to determining a packing action from the candidate packing actions in the action space that corresponds to a largest reward value of the reward values; and (2) updating the state space is equivalent to packing the first one of the to-be-packed patches (the future patches; 𝑝𝑖, 𝑝𝑖+1, …, 𝑝𝑖+𝐻) by adjusting 𝜃★𝑖 and 𝑡★𝑖 as a pose and a distance.
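As a non-limiting illustration (hypothetical code; the reward surrogate and the target "gap center" are assumptions, not Yang's learned policy or reward), the cited discrete action space of 16 x 16 (rotation, direction) angle pairs and the selection of the action with the largest reward can be sketched as:

```python
import math

# Illustrative sketch: enumerate a discretized (theta, phi) action grid and
# pick the action maximizing a toy reward that favors placing the new patch's
# center-of-mass close to a target location near the packed patches.
def best_action(gap_center, packed_com, radius=1.0, num_angles=16):
    actions = [(t * 2 * math.pi / num_angles, p * 2 * math.pi / num_angles)
               for t in range(num_angles) for p in range(num_angles)]
    def reward(action):
        _theta, phi = action                       # rotation unused by this toy surrogate
        com = (packed_com[0] + radius * math.cos(phi),
               packed_com[1] + radius * math.sin(phi))
        return -math.dist(com, gap_center)         # closer to target -> larger reward
    return max(actions, key=reward)                # action with the largest reward

# With the target directly to the right, the best direction angle is phi = 0.
theta, phi = best_action(gap_center=(1.0, 0.0), packed_com=(0.0, 0.0))
```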
Regarding claim 7, Yang in view of Onzon discloses The method of claim 6, wherein:
the determined packing action includes a translation action to reduce the spacing between the first one of the to-be-packed patches and the already-packed patches and a rotation action to adjust a pose of the first one of the to-be-packed patches, and
the determined packing action is applied to the first one of the to-be-packed patches according to a collision-constrained local optimization such that a center-of-mass (COM) of the first one of the to-be-packed patches and a COM of the already-packed patches is reduced to a predetermined value and the first one of the to-be-packed patches and the already-packed patches are not overlapped. (Yang, page 4, section 4.1.2, “We tackle these two problems by having the policy 𝜋LPN to select a rough initial guess and then use local optimization to revise the pose. Specifically, we re-parameterize the action space 𝐴 under polar coordinates (Fig. 3). We first compute the COMs for 𝑃𝑖−1 and 𝑝𝑖 , denoted as COM(𝑃𝑖−1) and COM(𝑝𝑖 ). The relative position COM(𝑝𝑖) with respect to COM(𝑃𝑖−1) is expressed under polar coordinates with relative angle 𝜙𝑖 and we ignore their relative distance. Similarly, the local-to-global rotation of 𝑝𝑖 is encoded as another angle 𝜃𝑖. we define our action space as: 𝑎𝑖 ≜ (𝜃𝑖 , 𝜙𝑖 ). At an early stage of this research, we parameterize our policy to use this continuous action space. However, in experiments we find the optimal state-action distribution can be multi-modal, and Gaussian distributions widely adopted to model the continuous action space cannot capture the multi-modal nature, leading to inferior performances. Therefore, we sample the range of 𝜃𝑖 and 𝜙𝑖 at 16 angles and consider the 16 × 16 = 256-dimensional discrete action space”; page 4, section 4.1.3, “we also use the state transition function to locally revise the action and improve the packing ratio. To this end, we devise the following collision-constrained local optimization: 𝜃★𝑖 , 𝑡★𝑖 ≜ argmin𝜃,𝑡 ∥𝑅(𝜃)COM(𝑝𝑖 ) + 𝑡 − COM(𝑃𝑖−1) ∥2 s. t. [𝑅(𝜃)𝑝𝑖 + 𝑡 ] ∩ 𝑃𝑖−1 = ∅, (1) where we initialize 𝜃 = 𝜃𝑖 and 𝑡𝑖 is initialized from 𝜙𝑖 by elongating the relative direction (red line in Fig. 3) between 𝑃𝑖 and 𝑝𝑖 until they are collision free”). 
Note that: (1) the determined packing action 𝑎𝑖 include a relative angle 𝜙𝑖 of the relative position COM(𝑝𝑖 ) with respect to COM(𝑃𝑖−1); and (2) the state transition function is used to locally revise the action including the translation 𝑡★𝑖.
Regarding claim 8, Yang in view of Onzon discloses The method of claim 4, wherein the determining the area-averaged packing ratio associated with each of the packed L subsets further comprises:
determining a packing ratio of the packed first subset of the packed L subsets based on a ratio of an area of the first subset and an area of the first bounding box of the packed first subset; and (Yang, page 5, section 4.3.1, “
[equations: area-averaged packing ratio, media_image8.png]
”). Note that: (1) the calculation method of the area-averaged packing ratio can be applied to the packed first subset of the packed L subsets to determine a packing ratio of the packed first subset of the packed L subsets; and (2) by the definition of a packing ratio, it is obvious to one having ordinary skills in the art that a packing ratio is a ratio of two corresponding areas (i.e., an area of the first subset and an area of the first bounding box of the packed first subset,
[media_image12.png] and [media_image13.png]).
determining the area-averaged packing ratio associated with the packed first subset of the packed L subsets based on a ratio of (i) a sum of areas of the UV patches in the super-patches that includes the packed first subset in the primary set and (ii) a sum of areas of subset bounding boxes of the super-patches that includes the packed first subset in the primary set. Note that: a sum of areas of the UV patches in the super-patches (“
[media_image12.png]”) that includes the packed first subset in the primary set and a sum of areas of subset bounding boxes of the super-patches (“[media_image13.png]”) that include the packed first subset in the primary set can be calculated or determined accordingly when the formula cited above for the rejection of this claim is applied to the packed first subset of the packed L subsets.
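As a non-limiting illustration (hypothetical code reflecting a simplified reading of the cited formulas, not Yang's exact notation), a packing ratio is a ratio of patch area to bounding-box area, and the area-averaged version aggregates over super-patches:

```python
# Illustrative sketch: a packing ratio as the ratio of total packed patch
# area to bounding-box area, and an area-averaged packing ratio computed
# over a list of (patch_areas, bbox_area) super-patch pairs.
def packing_ratio(patch_areas, bbox_area):
    return sum(patch_areas) / bbox_area

def area_averaged_packing_ratio(super_patches):
    """super_patches: list of (patch_areas, bbox_area) pairs."""
    total_patch_area = sum(sum(p) for p, _ in super_patches)
    total_bbox_area = sum(b for _, b in super_patches)
    return total_patch_area / total_bbox_area

r = packing_ratio([3.0, 5.0], bbox_area=10.0)            # 8 / 10 = 0.8
avg = area_averaged_packing_ratio([([3.0, 5.0], 10.0),
                                   ([2.0, 2.0], 5.0)])   # 12 / 15 = 0.8
```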
Regarding claim 9, Yang in view of Onzon discloses The method of claim 6, further comprising:
training the LPN based on a Q-learning algorithm by maximizing an expected cumulative reward value, wherein the expected cumulative reward value is defined as:
[equation: expected cumulative reward, media_image14.png]
i indicating an i-th patch of the to-be-packed patches, ai indicating a determined packing action from the candidate packing actions that is applied to the i-th patch, r (si, ai, si+1) indicating a reward value in response to the determined packing action being applied to the i-th patch,
[media_image15.png]
indicating a probability that the determined packing action ai is selected from the candidate packing actions, si indicating a state space before the i-th patch is packed to the already-packed patches, si+1 indicating a state space after the i-th patch is packed to the already-packed patches. (Yang, page 4, section 4.1.4, After each state transition, the policy receives a sparse reward signal defined as: 𝑟 (𝑠𝑖 , 𝑎𝑖 , 𝑠𝑖+1) = pr(𝑃𝑖 )I[𝑖 = 𝐻], where pr(𝑃) is the packing ratio of super-patch 𝑃 and I[𝑖 = 𝐻] is an indicator function of the last iteration. Note that using sparse reward signals can significantly slow down policy learning, but such reward does not pose a major problem in our application as we use a short horizon 𝐻, i.e. |𝐻| < 5. We train our LPN policy via Q-learning algorithm by maximizing the expected cumulative reward: argmax
[equation: media_image16.png].
To solve the stochastic optimization, we adopt the robust double deep Q-learning (DDQN) algorithm [Van Hasselt et al. 2016] and train 𝜋LPN to pack randomly sampled batches of at most 𝐻 patches of arbitrary order from our patch dataset, and we ensure each sampled batch comes from the same 3D model”).
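As a non-limiting illustration (a tabular stand-in in hypothetical code; Yang trains via DDQN), the Q-learning principle cited above, maximizing the expected cumulative reward, nudges each action value toward the reward plus the discounted best next-state value:

```python
# Illustrative sketch: a single tabular Q-learning update. Repeated updates
# drive Q[s][a] toward r + gamma * max_a' Q[s'][a'], i.e., toward the
# expected cumulative reward of taking action a in state s.
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next])                  # value of the best next action
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Toy problem: from state 0, action 1 yields reward 1.0 and action 0 yields 0;
# the learned values come to reflect that ordering.
Q = {0: [0.0, 0.0], 1: [0.0, 0.0]}
for _ in range(50):
    q_learning_update(Q, s=0, a=1, r=1.0, s_next=1)
    q_learning_update(Q, s=0, a=0, r=0.0, s_next=1)
```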
Regarding claim 10, Yang in view of Onzon discloses The method of claim 2, further comprising:
training the HSN, and
the training the HSN further includes: (page 2, section 1, “we train a high-level group selector network (HSN) to efficiently predict how likely a candidate patch subset can be grouped into a rectangular super-patch”).
determining the first predicted packing ratio for a first training subset from the primary set and a second estimated area-averaged packing ratio for a second training subset from the primary set; (page 5, section 4.3.2, “During each learning iteration, we randomly sample two (at most) 𝐻-patch groups denoted as S’ and S’’ with ground truth packing ratios denoted as pr’ and pr’’ and HSN predicted packing ratio denoted as HSN(S’) and HSN(S’’)”). Note that: (1) each learning iteration is equivalent to each training step; (2) the sampled two H-patch groups (S’ and S’’) can be regarded as a first training subset from the primary set and a second training subset from the primary set, respectively; (3) HSN(S’) and HSN(S’’) can be regarded as the first predicted packing ratio and the second predicted packing ratio.
determining first ground truth packing ratio for the first training subset and the second ground truth packing ratio for the second training subset; and (page 5, section 4.3.2, “During each learning iteration, we randomly sample two (at most) 𝐻-patch groups denoted as S’ and S’’ with ground truth packing ratios denoted as pr’ and pr’’”; page 13, section B, “We summarize our training procedure in Alg. 2”; page 13, Algorithm 2: “
[Image media_image17.png: Algorithm 2, the HSN training procedure]
”). Note that: (1) Algorithm 2 is the flowchart to train the HSN; (2) the ground truth packing ratios denoted as pr’ and pr’’ are computed or determined for the two 𝐻-patch groups denoted as S’ (the first training subset) and S’’ (the second training subset); and (3) pr’ and pr’’ are regarded as the first ground truth packing ratio and the second ground truth packing ratio, while HSN(S’) and HSN(S’’) are regarded as the first predicted packing ratio and the second predicted packing ratio.
updating the HSN via a margin ranking loss, the margin ranking loss being equal to L= max(0, -sgn(pr' - pr")(HSN(S') - HSN(S")) + e), pr' and HSN(S') being the first ground truth packing ratio and the first predicted packing ratio for the first subset respectively, pr" and HSN(S") being the second ground truth packing ratio and the second predicted packing ratio for the second training subset respectively, e being a minimal positive margin. (page 5, section 4.3.2, “During each learning iteration, we randomly sample two (at most) 𝐻-patch groups denoted as S’ and S’’ with ground truth packing ratios denoted as pr’ and pr’’ and HSN predicted packing ratio denoted as HSN(S’) and HSN(S’’). We then update HSN via the following margin ranking loss:
L = max(0, −sgn(pr′ − pr′′)(HSN(S′) − HSN(S′′)) + ε)
”).
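For illustration of the margin ranking loss cited from Yang above, L = max(0, −sgn(pr′ − pr′′)(HSN(S′) − HSN(S′′)) + ε) can be sketched directly (a minimal sketch for clarity only; the function and parameter names are illustrative and appear in neither Yang nor the claims):

```python
import math

def margin_ranking_loss(pr1, pr2, hsn1, hsn2, eps=0.01):
    """L = max(0, -sgn(pr' - pr'') * (HSN(S') - HSN(S'')) + eps)."""
    # sgn of the ground-truth gap determines which prediction should rank higher
    sgn = 0.0 if pr1 == pr2 else math.copysign(1.0, pr1 - pr2)
    return max(0.0, -sgn * (hsn1 - hsn2) + eps)
```

The loss is zero whenever the HSN predictions order the two subsets the same way as their ground truth packing ratios, by at least the margin ε; otherwise it penalizes the mis-ordering in proportion to the prediction gap.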
Regarding claim 11, Yang in view of Onzon discloses The method of claim 1, wherein the adjusting further comprises:
adjusting the poses of the plurality of super-patches by rotating and translating the plurality of the super-patches such that the spacing between the plurality of super-patches is reduced such that the first bounding box is reduced into a second bounding box, the second bounding box corresponding to a minimized bounding size according to an optimization function such that the plurality of super-patches in the second bounding box are not overlapped. (page 6, section 4.4, “With the initially packed result from bin-packing, we then propose to adjust the joint poses of all the patches, locally squeezing them together via numerical optimization. We denote the bounding box enclosing all the 𝑀 patches as bound(𝑝1, · · ·, 𝑝𝑀), and our optimization problem is formulated as:
[Equation (3), media_image19.png: minimize the size of bound(𝑝1, · · ·, 𝑝𝑀) over the joint patch poses 𝑝1, · · ·, 𝑝𝑀, subject to collision-free constraints]
We again use the barrier function technique [Smith and Schaefer 2015] to handle all the collision-free guarantees and ensure that the bounding box is surrounding all the patches”). Note that: (1) after assembling the super-patches together into a first bounding box (the final bounding box from bin-packing), the first bounding box contains a plurality of super-patches; (2) by adjusting the poses of the plurality of super-patches with the optimization function (“equation (3)” above), the first bounding box is further reduced into a bounding box as a second bounding box.
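For illustration of the pose adjustment cited from Yang above, applying a rigid pose (rotation plus translation) to each patch and measuring the enclosing bounding box bound(𝑝1, · · ·, 𝑝𝑀) can be sketched as follows (a minimal sketch for clarity only, using an axis-aligned box; the function names are illustrative and appear in neither Yang nor the claims, and the barrier-function collision handling is omitted):

```python
import math

def apply_pose(points, theta, tx, ty):
    """Rotate each 2D vertex by theta, then translate by (tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def bound_area(patches):
    """Area of the axis-aligned bounding box enclosing all patch vertices."""
    xs = [x for patch in patches for x, _ in patch]
    ys = [y for patch in patches for _, y in patch]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))
```

An optimizer in the spirit of equation (3) would search over the per-patch (theta, tx, ty) values to shrink bound_area while keeping the patches from overlapping; here only the objective evaluation is sketched.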
Claim 12, reciting “An apparatus, the apparatus comprising:”, corresponds to the method of claim 1. Therefore, claim 12 is rejected with the same prior art and citations as claim 1.
In addition, Yang in view of Onzon discloses An apparatus, the apparatus comprising: (Yang, page 6, section 5, “We perform all experiments on a computer with an Intel E5-1650 12-core CPU at 3.60GHz and 32GB RAM”). Note that: the computer system can be regarded as an apparatus.
Claim 13 corresponds to the method of claim 2. Therefore, claim 13 is rejected with the same prior art and citations as claim 2.
Claims 14-20 correspond to the method of claims 3-9, respectively. Therefore, claims 14-20 are rejected for the same rationale as claims 3-9, respectively.
Response to Arguments
Applicant's arguments with respect to the claim rejections under 35 U.S.C. § 103 have been fully considered but they are not persuasive.
Applicant alleges, “Regarding the rejection of Claim 1 under 35 U.S.C. § 103, it is respectfully submitted that Yang and Onzon fail to disclose ‘dividing the plurality of UV patches into a primary set of the UV patches and a secondary set of the UV patches based on a comparison between a size of each of the plurality of UV patches and a patch size threshold, the patch size threshold being less than or equal to 5 pixels,’ and ‘a total number of UV patches in each of the plurality of subsets being less than or equal to 4,’” (page 12, lines 14-20). However, the arguments are moot because the corresponding newly amended limitations, “a comparison between a size of each of the plurality of UV patches and a patch size threshold” and “a total number of UV patches in each of the plurality of subsets being less than or equal to 4”, have been addressed in the detailed claim rejection under 35 U.S.C. § 103 above. The arguments are not persuasive.
Applicant alleges, “Onzon, at paragraph [0131], discusses ‘Note that in this version we not only allow patches of odd size (like 3 by 3 pixels) but also of even size (like 2 by 2 pixels).’ That is, Onzon merely discusses patches with odd size (e.g., 3 by 3 pixels) and even size (e.g., 2 by 2 pixels). Onzon does not disclose that the odd and even size patches are divided ‘into a primary set of the UV patches and a secondary set of the UV patches based on a comparison between a size of each of the plurality of UV patches and a patch size threshold, the patch size threshold being less than or equal to 5 pixels,’ as defined in amended Claim 1.” However, Examiner respectfully disagrees with the allegations as a whole because: (1) Onzon discloses the patch size threshold being less than or equal to 5 pixels; and (2) the combination of Yang and Onzon with the corresponding rationale, rather than either Yang or Onzon alone, discloses the limitations (see the details above). The arguments are not persuasive.
Applicant alleges, “Further, neither Yang nor Onzon discusses ‘a total number of UV patches in each of the plurality of subsets being less than or equal to 4,’ as defined in Claim 1. For example, at section 3, lines 13-15 of Yang, Yang merely discusses ‘We first employ a sampling-based group selector network (HSN Sec. 4.3) to identify a potential subset S' of at most H patches that can form a nearly rectangular super-patch.’ Accordingly, it is respectfully submitted that Claim 1 (and all associated dependent claims) patentably defines over Yang and Onzon." (page 13, lines 4-10) However, Examiner respectfully disagrees with the allegations as a whole because Yang does disclose a total number of UV patches in each of the plurality of subsets being less than or equal to 4, as explained in detail in the rejection of claim 1 above. The arguments are not persuasive.
Applicant alleges, “Independent Claim 12, although differing in scope and statutory class, patentably defines over Yang and Onzon at least for reasons analogous to the reasons stated above for the patentability of Claim 1. Accordingly, it is respectfully submitted that Claim 12 (and all associated dependent claims) patentably defines over Yang and Onzon” (page 13, lines 11-14). However, Examiner respectfully disagrees with the allegations as a whole because: (1) independent claim 12 corresponds to claim 1 and is therefore rejected for the same rationale as claim 1 above; and (2) all associated dependent claims are rejected for the respective rationales above. The arguments are not persuasive.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN whose telephone number is (703)756-1199. The examiner can normally be reached M-F 8am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee M Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Biao Chen/
Patent Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611