Code Generation
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..2df7cd6
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+.DS_store
+.idea
diff --git a/README.md b/README.md
index edd91ed..e84d02d 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,16 @@
+# Nerfies
+
+This is the repository that contains source code for the [Nerfies website](https://nerfies.github.io).
+
+If you find Nerfies useful for your work please cite:
+```
+@article{park2021nerfies,
+ author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
+ title = {Nerfies: Deformable Neural Radiance Fields},
+ journal = {ICCV},
+ year = {2021},
+}
+```
+
# Website License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
diff --git a/deepseekcoder.html b/deepseekcoder.html
new file mode 100644
index 0000000..27ae660
--- /dev/null
+++ b/deepseekcoder.html
@@ -0,0 +1,341 @@
+
+
+
+ Deepseek Coder comprises a series of code language models trained on 87% code and 13% natural language in both English and Chinese, with each model pre-trained on 2T tokens.
+ We provide various sizes of the code model, ranging from 1B to 33B versions.
+ Each model is pre-trained on a repo-level code corpus using a window size of 16K and an extra fill-in-the-blank task, resulting in foundational models (DeepSeek-Coder-Base).
+ We further fine-tune the base model with 2B tokens of instruction data to obtain instruction-tuned models, named DeepSeek-Coder-Instruct.
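The fill-in-the-blank pre-training task mentioned above can be sketched as follows. This is a minimal illustration of how such a training example is typically constructed; the sentinel token strings below are placeholders, not DeepSeek-Coder's actual tokenizer special tokens:

```python
# Sketch of a fill-in-the-middle (FIM) training example. The sentinel
# strings are hypothetical placeholders; real models use tokenizer-specific
# special tokens.
import random

FIM_BEGIN, FIM_HOLE, FIM_END = "<fim_begin>", "<fim_hole>", "<fim_end>"

def make_fim_example(code: str, rng: random.Random) -> str:
    """Split a document into prefix/middle/suffix and rearrange it so the
    model learns to predict the middle from its surrounding context."""
    i, j = sorted(rng.sample(range(len(code) + 1), 2))
    prefix, middle, suffix = code[:i], code[i:j], code[j:]
    # Prefix-Suffix-Middle (PSM) ordering: the middle is the training target.
    return f"{FIM_BEGIN}{prefix}{FIM_HOLE}{suffix}{FIM_END}{middle}"

example = make_fim_example("def add(a, b):\n    return a + b\n", random.Random(0))
```

At inference time the same layout lets the model complete a hole in the middle of a file from both its prefix and suffix.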
+ For coding capabilities, DeepSeek-Coder-Base achieves state-of-the-art performance among open-source code models across multiple programming languages and benchmarks.
+ Compared to GPT-3.5-turbo, DeepSeek-Coder-Instruct demonstrates superior performance on HumanEval while maintaining comparable performance on MBPP.
+ We evaluate DeepSeek Coder on various coding-related benchmarks.
+ The results show that DeepSeek-Coder-Base-33B significantly outperforms existing open-source code LLMs.
+ Compared with CodeLlama-34B, it leads by 7.9%, 9.3%, 10.8% and 5.9% on HumanEval Python, HumanEval Multilingual, MBPP and DS-1000, respectively.
+ Surprisingly, our DeepSeek-Coder-Base-7B reaches the performance of CodeLlama-34B.
+ After instruction tuning, DeepSeek-Coder-Instruct-33B outperforms GPT-3.5-turbo on HumanEval and achieves comparable results to GPT-3.5-turbo on MBPP.
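Benchmarks such as HumanEval and MBPP are typically scored with the unbiased pass@k estimator from the Codex evaluation methodology (Chen et al., 2021); a minimal sketch:

```python
# Unbiased pass@k estimator: given n generated samples per problem, of
# which c pass the unit tests, estimate the probability that at least one
# of k samples passes.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        # Fewer than k failures: any k-subset must contain a passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k = 1 this reduces to the fraction of passing samples, c / n.
score = pass_at_k(200, 120, 1)  # -> 0.6
```

The benchmark score is this quantity averaged over all problems in the suite.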
+ Apart from its proficiency in coding, DeepSeek Coder also demonstrates outstanding mathematical and reasoning abilities.
+ Even without instruction tuning, the DeepSeek-Coder-Base model shows impressive performance on mathematical reasoning evaluations.
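One common way code models are applied to mathematical reasoning is program-aided: the model emits a short program whose execution yields the answer, rather than answering in prose. A minimal sketch of the harness side; the "generated" snippet below is illustrative, not real model output:

```python
# Program-aided reasoning sketch: execute a model-written program and read
# off its final `answer` variable. The generated code here is a hypothetical
# example for a simple word problem.
generated = """
apples = 23          # apples on hand
bought = 6 * 2       # two bags of six
eaten = 5
answer = apples + bought - eaten
"""

namespace = {}
exec(generated, namespace)  # run the model-written program
result = namespace["answer"]  # -> 30
```

Production harnesses sandbox this execution step; `exec` on untrusted model output is unsafe outside a sketch.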
+ If you have any questions, please raise an issue or contact us at agi_code@deepseek.com.
+